In this special guest feature from Scientific Computing World, Jan Rowell looks at the growing use of digital twins, driven by advances in HPC and the Internet of Things (IoT).
The concept of a digital twin—the mirroring of a physical object with a virtual object created by simulation-based engineering—has been around since NASA began using numerical simulation technology and comparing the results to ground-based physical models as it developed and managed early spacecraft.
Today, digital twins are hot: a subject of intense interest and investment. Gartner puts digital twins fourth on its list of the top 10 strategic technology trends for 2019. A February 2019 research report indicates that three-fourths of organisations implementing the Internet of Things (IoT) are already using digital twins or plan to do so within a year. The global digital twin market is growing at a 40 per cent compound annual growth rate (CAGR) and is projected to reach USD 15.66 billion by 2023.
The shift is being driven by advances in high-performance computing (HPC), IoT, simulation methods such as reduced-order modeling (ROM), and other technologies. Together, these advances have paved the way for ‘second-generation digital twins,’ which combine operational data from connected assets with physics-based models built on numerical simulation. Robust second-generation twins are expanding simulation’s focus from engineering better products to operating products more effectively, and their impact reaches far beyond simulation’s traditional realm of complex, high-value product manufacturing.
A New Era for HPC-Powered Simulation
Second-generation, simulation-based digital twins represent both a new era for simulation and a leap forward in digital twin functionality, according to Jacques Duysens, who leads digital twin business development for Ansys in Europe and the Middle East. ‘A simulation-based digital twin is a connected, virtual replica of an in-service physical asset,’ Duysens says. ‘It’s an integrated system simulation that can encompass thermal, electromechanical, and other relevant physics, either coupled or not, along with sensor data from the operating assets as well as historical and other data relating to the assets throughout their lifetimes. It results in pervasive simulation—continuous simulation with all physics across the entire lifecycle for all products.’
Today’s digital twins can mirror the experience of individual assets in actual operating conditions in real time or near real time, depending on factors including the twins’ complexity. For example, an energy company can create a twin of each individual turbine in a wind farm and study in the digital environment how each is affected by its unique location in the field. Organizations can mirror the operation of a component, a system, an assembly line, or an entire manufacturing plant, and can troubleshoot problems down to the 3D component level. Examples of digital twins are increasingly numerous and include pumps, wind turbines, electrical motors, alternators, and other assets.
This ability to simulate complex products and systems in their working environments offers a strong value proposition. Enterprises can impact the bottom line through lower operating costs: many customers tell Duysens that using validated digital twins in operations can unlock up to 20 per cent improvements in efficiency and reductions in maintenance costs. Enterprises can also potentially achieve top-line advantages through increased product innovation and new predictive maintenance services. And the value is growing as advancing technologies make large-scale, real-time digital twins an increasingly practical reality.
“Digital twins provide a powerful basis for organizations to increase the return on their investments in HPC, modeling, simulation, IoT, and analytics,” says Marie-Christine Sawley, an HPC expert in Intel’s Data Center Group who leads the Intel Exascale Lab in Paris and manages the collaboration with the Barcelona Supercomputing Center. “Digital twins are an excellent example of the new, data-rich applications that are delivering continuous intelligence and radically improved decision-making across the enterprise. As chair of the ISC 2018 Industry Day, I was really impressed by the progress made and the quality of innovation as the solutions developed by independent software vendors (ISVs) increase in complexity and power. We are seeing great interest and clearly expressed expectations from the industrial community, for example from user organizations in aeronautics, energy, and monitoring. With the convergence of HPC, analytics, and AI, digital twins will continue to evolve in exciting new ways.”
Use Cases in Product Engineering and Operations
Combining simulation-based digital twins with IoT data from operating assets allows for innovative use cases in both development and operations. Design and engineering teams can feed operational data into their simulation models to validate and verify their engineering assumptions and proceed with greater confidence. Teams can gain deeper insights into how their products hold up in real-world conditions, enabling them to explore potential new features and increase the utility and reliability of next-generation products. They can implement virtual sensors and use them to predict asset behavior in environments where real sensors are not feasible.
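The virtual-sensor idea can be sketched in a few lines: a surrogate model is fitted to high-fidelity simulation output and then estimates an unmeasurable quantity from sensors that are feasible to install. Everything below (the sensor names, the linear relationship, the numbers) is a hypothetical illustration, not any actual twin model.

```python
import numpy as np

# Hypothetical virtual-sensor sketch: predict internal stress (hard to
# measure directly) from vibration and inlet temperature (easy to measure),
# using training data produced by a high-fidelity simulation.

rng = np.random.default_rng(42)

# Training set from the simulation: 200 operating points, two measurable
# inputs, and the unmeasurable output the simulation can compute.
X = rng.uniform(0.0, 1.0, size=(200, 2))       # vibration, temperature
true_w = np.array([3.0, 1.5])                  # invented physics for the demo
y = X @ true_w + 0.7                           # internal stress from the model

# Least-squares fit of the surrogate (design matrix with an intercept column).
A = np.hstack([X, np.ones((X.shape[0], 1))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

# Online phase: estimate the unmeasurable quantity from live sensor readings.
live = np.array([0.4, 0.8, 1.0])               # vibration, temperature, bias
estimate = live @ w                            # 3.0*0.4 + 1.5*0.8 + 0.7 = 3.1
```

In practice the surrogate would be nonlinear and trained on far richer simulation campaigns, but the fit-offline, evaluate-online split is the essential pattern.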
Operations teams can use digital twins to perform predictive maintenance. They can study wear and tear on operating assets to identify problems before they become symptomatic. Digital twins can be integrated into solutions that generate alerts and identify optimal times to perform routine maintenance, implementing strategies that increase uptime and reduce maintenance costs. Ops teams can troubleshoot problems through what-if scenarios, simulating possible solutions in the twin before making adjustments or repairs to the physical asset.
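The alert-generation pattern described above can be illustrated with a minimal residual monitor: compare each sensor reading against the twin's prediction for the same operating conditions, and flag the asset when the gap drifts past a threshold. The bearing-temperature model and threshold below are hypothetical placeholders, not values from any real deployment.

```python
import numpy as np

def twin_prediction(load):
    """Stand-in for the digital twin: expected bearing temperature (deg C)
    at a given load. Invented linear model for illustration only."""
    return 40.0 + 0.5 * np.asarray(load)

def check_asset(loads, measured_temps, threshold=5.0):
    """Return indices of readings whose residual (measured minus predicted)
    exceeds the threshold, i.e. candidates for a maintenance alert."""
    residuals = np.abs(np.asarray(measured_temps) - twin_prediction(loads))
    return list(np.nonzero(residuals > threshold)[0])

# Healthy readings track the model closely; the last two show the kind of
# wear-induced drift a predictive-maintenance system would catch early.
loads          = [10, 20, 30, 40, 50]
measured_temps = [45.2, 50.1, 54.8, 68.0, 73.5]   # twin expects 45..65

alerts = check_asset(loads, measured_temps)        # flags readings 3 and 4
```

Real systems track residual trends over time rather than single thresholds, but the core idea is the same: the twin supplies the expected behavior, and maintenance is scheduled off the deviation.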
With real-time or near real-time responsiveness, digital twins can also be used to optimize the performance of individual assets and fleets of assets. This can help extend the life of individual assets and balance performance across a group of assets, perhaps reducing the number of assets required to meet a given workload.
Beyond Traditional Simulation Users
Early adopters of digital twin technologies have come from simulation’s traditional base of companies that manufacture or deploy complex, high-value assets—but the market for digital twins is expanding rapidly. Products are becoming increasingly sophisticated and intelligent, leading more manufacturers to use simulation and digital twins to help create and manage them. In addition, numerous industries are finding innovative ways to create and deploy digital twins.
In healthcare, for example, digital twins are being used not only to optimize the design of medical devices but also to study personalised models that simulate the fluid dynamics of an individual’s heart. Surgeons can, in effect, convert a scanned image and other information into a digital twin, then try out different treatments and strategies on the twin before conducting surgery.
Networks of digital twins are helping architects optimize building design and urban planners improve mass transit and disaster management. Simulation-based twins are also being used in training scenarios, particularly when training on real-life assets would be hazardous or costly.
HPC and Reduced-Order Modeling
From building a twin to harnessing the data it produces, digital twins present a massive big-data challenge, one that requires high-performance infrastructure from the enterprise edge to the cloud or data centre. HPC supports the distributed computing pipeline that creates digital twins: ingesting and processing sensor data, merging it with diverse sources of enterprise data, analysing it to deliver actionable insights, and visualizing it to promote rapid responses.
HPC infrastructure and progress in the simulation methodology known as reduced-order modeling (ROM) are enabling developers and engineers to create increasingly detailed models and run the sophisticated simulations that form the basis for the reduced-order models implemented within digital twins. ROM uses mathematical algorithms to simplify high-fidelity models while preserving their essential behavior. Together, HPC and ROM enable operations teams to mirror deployed assets in real time or near real time, depending on a twin’s complexity. In the design space, they allow designers to run more simulations and generate large-scale optimization plans, designing more reliable and innovative products and bringing them to market more quickly.
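One common way to build such a reduced-order model is proper orthogonal decomposition (POD), which compresses snapshots from full-order simulations via a singular value decomposition. The article does not name a specific ROM method, so the following is an illustrative sketch with synthetic data rather than any vendor's implementation.

```python
import numpy as np

# POD sketch: a high-fidelity simulation with 1000 degrees of freedom is
# compressed to a handful of modes that capture almost all of its energy.

rng = np.random.default_rng(0)

# Synthetic snapshot matrix: each column is the full-order state at one
# time step. The dynamics are built from 3 dominant modes plus tiny noise.
n_dof, n_snapshots = 1000, 50
modes = rng.standard_normal((n_dof, 3))
coeffs = rng.standard_normal((3, n_snapshots))
snapshots = modes @ coeffs
snapshots = snapshots + 1e-6 * rng.standard_normal(snapshots.shape)

# POD is the SVD of the snapshot matrix.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)

# Keep only the modes needed to capture ~99.99% of the energy.
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.9999)) + 1

# Project into the r-dimensional reduced space (cheap to evolve online
# inside a twin) and lift back to full order to check fidelity.
basis = U[:, :r]                          # n_dof x r reduced basis
reduced = basis.T @ snapshots             # r x n_snapshots
reconstructed = basis @ reduced

rel_error = np.linalg.norm(snapshots - reconstructed) / np.linalg.norm(snapshots)
# r is 3 here, and the relative reconstruction error is near machine noise.
```

The point of the exercise is the size reduction: evolving an r-dimensional state in the twin is vastly cheaper than re-running the full-order simulation, which is what makes real-time mirroring feasible.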
HPC-powered analytics, including machine learning and other forms of AI, are also essential to deriving value from the large data volumes generated by operational twins. The analytic results—presented through simple alerts as well as state-of-the-art visualization and augmented reality (AR) technologies—can contribute to actionable insights that help optimize everything from an individual asset’s lifespan to the operation of an entire factory. In addition to running the analytics, HPC platforms are used to train machine learning models that can identify previously unforeseen patterns in digital twin data. HPC-enabled AI will play an important part in creating next-generation twins that blend top-down approaches based on first-principles-based simulations with bottom-up approaches deriving models from sensor data.
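As a minimal illustration of mining twin telemetry for unusual patterns, the sketch below fits a Gaussian to features logged during normal operation and flags new readings by Mahalanobis distance. Production systems would use far richer machine learning models than this, and all numbers here are invented.

```python
import numpy as np

# Unsupervised anomaly detection on digital-twin telemetry: learn the
# distribution of normal operation, then score new readings against it.

rng = np.random.default_rng(7)

# Features per time step, e.g. (power output in kW, casing vibration in mm/s).
normal = rng.normal(loc=[100.0, 2.0], scale=[5.0, 0.3], size=(500, 2))

mean = normal.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(normal, rowvar=False))

def mahalanobis(x):
    """Distance of a reading from the learned normal-operation envelope."""
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

# New telemetry: one typical reading and one with a vibration spike.
typical = np.array([101.0, 2.1])
anomaly = np.array([99.0, 4.5])

typical_ok = mahalanobis(typical) < 3.0    # within normal variation
anomaly_flagged = mahalanobis(anomaly) > 3.0  # flag for investigation
```

The Mahalanobis score accounts for correlations between features, so a reading can be flagged even when each feature is individually plausible; more capable detectors generalize this idea with learned nonlinear models.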
Targeting Performance and Ease of Use
Continued breakthroughs in HPC and simulation technologies are helping to further the development and use of digital twins. Ansys recently expanded its simulation toolset with a product called Twin Builder that Duysens says facilitates the work of building, validating, and deploying multi-domain, multi-physics digital twins. On the hardware front, Intel has disclosed forthcoming platform technologies optimised for data-centric workloads including HPC and AI, among others. The new technologies include the 2nd Generation Intel Xeon Scalable processor (formerly known as Cascade Lake), which Sawley says will incorporate a multi-chip package with up to 48 cores per CPU and 12 DDR4 memory channels per socket, as well as Intel Deep Learning Boost to accelerate deep learning use cases. Sawley also points to Intel Optane DC persistent memory, which Intel plans to deliver with the 2nd Gen Intel Xeon Scalable processor. The persistent memory solution is designed to bring data sets closer to the CPU for faster time-to-insight and greater resiliency in the data centre.
“These next-generation platform technologies are exactly what our customers need to run their large-scale numerical simulations, create reduced-order models, and provide cost-effective performance for the digital twin pipeline as we move forward,” says Duysens. “They will play an important role in meeting the growing demand for high throughput and compute performance for future digital twins and advancing simulations. We are optimizing our codes to take advantage of these capabilities and will continue to do so as we move toward exascale computing.”
With the growing use of digital twins bringing new organizations into the HPC community, Sawley and Duysens say their companies are introducing new solutions aimed at meeting the needs of novice and long-time HPC users. Intel has developed performance-optimized cluster reference architectures and is working with ecosystem partners to deliver verified, high-performance systems that simplify the work of purchasing and deploying HPC clusters. Ansys is collaborating with SAP to develop integrated solutions that link engineering and operations. The first result of their joint effort is SAP Predictive Engineering Insights (PEI) enabled by Ansys, a solution that embeds an Ansys runtime module into SAP PEI and runs on the SAP Cloud Platform. According to Duysens, the integrated solution allows for efficient deployment of digital twins and easier, more effective management of assets and data.
Barriers Falling, Benefits Growing
Barriers to digital twins are falling as HPC and other technologies become ever more powerful and simulation solution vendors expand their focus on software integration and ease of use. As the benefits of digital twins become more evident, we can expect this second-generation technology to drive continued growth for HPC, greater ROI for the industrial Internet, and transformative value for a wide range of industries.
Jan Rowell is an award-winning freelance writer who covers technology trends in HPC, artificial intelligence, and other areas.
This story appears here as part of a cross-publishing agreement with Scientific Computing World.