Understanding the intersection of AI and simulation – Part 1 Transcript
In this special miniseries, host Spencer Acain is joined by Todd Tuthill, Vice President for Aerospace, Defense and Marine Industry at Siemens, Dr. Justin Hodges, Senior AI/ML Technical Specialist at Siemens, as well as Fatma Kocer-Poyraz, Vice President of Engineering Data Science at Altair, to explore the ways AI/ML will complement traditional simulation and enhance design space exploration going forward. Check out the transcript of that talk below, or click here to listen to the episode.
Spencer: Hello, and welcome to the AI Spectrum podcast. I’m your host, Spencer Acain. In this special episode, we are joined by experts from Siemens and Altair to explore the impact of AI in simulation. Before we get to our main topic, I’d like to give you all a chance to introduce yourselves. Todd, maybe you’d like to start things off for us?
Todd: Thank you, Spencer. This is Todd Tuthill. I'm the Vice President for Aerospace, Defense and Marine Industry at Siemens, and I'm really happy to be here.
Spencer: Great. Justin, maybe you could tell us about yourself a little bit.
Justin: Hello again, Justin Hodges here. I'm a Senior AI/ML Technical Specialist at Siemens Digital Industries Software.
Spencer: Our special guest from Altair, Fatma, could you tell us a little bit about yourself?
Fatma: Hi, Spencer. Thank you for introducing me. I'm leading the Engineering Data Science team at Altair, and my background is in Multidisciplinary Design Exploration and Optimization.
Spencer: Great. Todd, could you tell us all a little bit about why we’re here today? What are we doing here learning about AI and simulation?
Todd: Yeah, thank you, Spencer. We're doing a couple of things. First off, I want to say welcome to Fatma, and not just welcome to Fatma, but welcome to the Altair team. I, for one, am so excited about the joining of these two companies. I think there are so many incredible things we can do together as a team. As we come together, bringing your capabilities in simulation together with our capabilities in simulation and industrial software, you bring us high-performance computing, data science, artificial intelligence, all kinds of incredible stuff, and we put that together in Siemens Xcelerator. We're creating the world's most complete AI-powered industrial design and simulation portfolio. It's just incredible. I'm excited for us, I'm excited for our customers, and I'm excited to now say it's official: Siemens is the number one industrial software company in the world. And beyond all that, I can say for people listening who maybe don't know the people at Altair: I've had the opportunity to meet people like Fatma and others at Altair. They're great people, and they fit the culture of Siemens. I'm just really excited about this opportunity to introduce the world to what the combination of Siemens and Altair is together, and to talk about what it means for the combination of AI and simulation. So it's going to be fun. Can't wait to do it.
Fatma: Todd, I have to mention that on behalf of all Altairians, we share the same sentiments and same excitement.
Spencer: Thank you, Todd, for that great introduction, those great comments. So to kick things off, Fatma, with your extensive experience with both simulation and AI, how would you describe the synergies between those two fields, AI and simulation?
Fatma: I think there's a great synergy between AI and simulation. The two technologies really augment each other; they complement each other as two tools in an engineer's toolkit. To give you one example: we have customers, engineers, who have been running simulations for the past 10, 15, 20 years. They have accumulated this historical simulation data, used it to make design decisions at the time they ran the simulations, and they've been wondering how they can get more use out of this historical data set. In the past, with traditional machine learning methods, this was not available to us. But we have recently implemented a geometric deep learning engine, Altair PhysicsAI, that can take historical simulation data that is not parametric and cannot be parameterized, and train machine learning models that allow quick design exploration.

When they're at the early design phases, what engineers want to do is explore a lot of ideas. They want to run a lot of what-if scenarios: what if I change this? What if I make it thicker? What if I change the material? What if I add a curvature? When you're using high-fidelity simulations, the time they take means you may not be able to explore so many ideas so quickly. But from this exploration you learn so much: you identify promising, high-potential candidates, you learn what works for your design improvements and what doesn't, and then you can proceed in that design direction. And when you have designs that fulfill your requirements, of course, we always suggest that you go and run a high-fidelity, physics-based simulation. In return, when you're running those simulations, you are increasing the dataset available to you to train machine learning models. That's how they augment each other: we use AI models to do quick design exploration, evaluating thousands of designs really quickly, and then we use simulation to validate those designs or to generate optimal datasets for ML model training.

We have customers that use these PhysicsAI models to give quick, reliable quotations to their customers, and we have customers that use these machine learning models to explore different ideas and different manufacturing processes across multiple physics: NVH, impact, CFD, high- and low-frequency electromagnetics, and manufacturing processes. What would normally take them days now sometimes takes seconds to minutes, and they know which design alternatives work and which may not be ideal for that situation, so they have a design direction that is going to meet the requirements, which they can then optimize. In the same sense, we're adding generative functionality to PhysicsAI that will allow engineers to generate new designs by which they can be inspired, or which they can put in an optimization loop to find optimal designs.
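To make that explore-then-validate loop concrete, here is a minimal sketch in Python. scikit-learn stands in for a geometric deep learning engine like Altair PhysicsAI (which works on non-parametric geometry rather than tabular parameters), and all column names, ranges, the toy response, and the placeholder solver call are illustrative assumptions, not any product's API.

```python
# Minimal sketch of the explore-with-ML, validate-with-simulation loop.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Stand-in for accumulated historical simulation results.
history = pd.DataFrame({
    "thickness_mm": rng.uniform(1.0, 5.0, 200),
    "curvature":    rng.uniform(0.0, 0.3, 200),
})
history["peak_stress_mpa"] = 100 / history["thickness_mm"] + 50 * history["curvature"]

features = ["thickness_mm", "curvature"]
surrogate = GradientBoostingRegressor().fit(history[features],
                                            history["peak_stress_mpa"])

# What-if exploration: score 10,000 candidate designs in seconds.
candidates = pd.DataFrame({
    "thickness_mm": rng.uniform(1.0, 5.0, 10_000),
    "curvature":    rng.uniform(0.0, 0.3, 10_000),
})
candidates["predicted_stress"] = surrogate.predict(candidates[features])

def run_high_fidelity_sim(design):
    """Placeholder for submitting one design to the real physics solver."""
    print("validate with high-fidelity simulation:", design.to_dict())

# Shortlist the most promising designs, then validate each one properly.
for _, design in candidates.nsmallest(5, "predicted_stress").iterrows():
    run_high_fidelity_sim(design[features])
```

Each validated run then flows back into the training set, which is the mutual-augmentation loop described above.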
Justin: It's great to have so many of your experiences to learn from. I think the audience is very lucky, myself included, to hear you share that. I don't know if it's good or bad, but I always think about it in this simplified way, at least in terms of categories. There's one category where, like you said, you have historical data from simulations you've run, you make this ML surrogate, and you use it for rapid screening of the space. I love how you said quotation. From there, you go off and validate with sim, right? But there's also a really similar pattern for design teams, though it's a different use case: you do the simulation exactly as you always have, but you swap out the methods you use to explore your design space with machine learning methods. Of course, they're better than things like random sampling or Latin hypercube sampling. I mean, our design space is dang near infinite, and our time is always less than we want it to be for a project. So it makes sense to reach for these high-fidelity machine learning approaches to pick which cases to run, and then we run them in simulation as we always do.

So the first thing I think about is HEEDS AI Simulation Predictor. At a generic, highly abstracted level, what's happening there is an optimization process. We have SHERPA or some other class-leading algorithm doing the optimization itself, and behind the scenes it's simulations running, orchestrated by the optimization algorithm. As the optimization picks different points to run to get better and more optimal designs, you can use a machine learning model to say, "That looks similar to ones you've already run; I can make an accurate prediction for that point." So you take a one-second prediction from the machine learning model, then quickly move on to the next point and see if you can reach into new territory and better designs, and we run simulations for those. In this way, you're running simulations as you always would, but you have this under-the-hood AI component that runs machine learning models for you, when possible, at its discretion. That means you can evaluate maybe thousands of designs in a period where you could previously only do hundreds. So it helps you explore your space more broadly with these intelligent methods, while still leveraging simulation as always.
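The sketch below illustrates the under-the-hood pattern Justin describes; it is not the actual HEEDS AI Simulation Predictor implementation. The optimizer calls a single evaluate function, which answers from a fast surrogate when a point looks similar to designs already simulated and falls back to the real solver otherwise. The trust radius, toy objective, and random-search "optimizer" are all stand-in assumptions.

```python
# ML-at-its-discretion evaluation inside an optimization loop (sketch).
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def expensive_simulation(x):
    """Stand-in for the real solver (instant here, hours in practice)."""
    return np.sin(3 * x[0]) + x[1] ** 2

evaluated_X, evaluated_y = [], []
surrogate = KNeighborsRegressor(n_neighbors=3)
TRUST_RADIUS = 0.15  # hypothetical "looks similar to ones already run" threshold

def evaluate(x):
    if len(evaluated_X) >= 3:
        dist = min(np.linalg.norm(x - p) for p in evaluated_X)
        if dist < TRUST_RADIUS:               # near known territory:
            return surrogate.predict([x])[0]  # one-second ML prediction
    y = expensive_simulation(x)               # new territory: really simulate
    evaluated_X.append(x)
    evaluated_y.append(y)
    surrogate.fit(evaluated_X, evaluated_y)   # keep the surrogate current
    return y

# The optimizer (SHERPA in HEEDS; random search here for brevity) just
# calls evaluate() and never needs to know which path answered.
rng = np.random.default_rng(1)
best = min(evaluate(rng.uniform(0, 1, 2)) for _ in range(200))
print("best objective found:", best, "| real simulations:", len(evaluated_X))
```

The point of the design is that the surrogate is invisible to the optimization algorithm, so many more candidate points can be evaluated within the same simulation budget.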
Fatma: The speed that AI and machine learning models bring to our understanding of product performance is truly impactful. We have customers for whom simulating a certain design scenario would take half a day; with these PhysicsAI models, that is reduced to mere seconds. With those models, they are exploring so many more design scenarios, finding opportunities they might otherwise miss, and coming up with better designs.
Justin: Yeah, I mean, I think there's a fundamental truth over everything we're doing, right? All models are wrong; some are useful. And that could be a machine learning model, a statistical model, or a simulation model. So we're really talking about things working together, fit for purpose. I think we've described how that could work in one case, where you're training a surrogate, exploring the space, and validating with simulation later, or doing sim as always but guiding which sims to run in a design exploration exercise with newer, more advanced techniques like machine learning models.
Fatma: And then of course, we always say, you need to go back and run the high-fidelity, precise simulations to validate those points. So don't ever use the AI model results as-is. Use AI to find promising designs and promising directions, then take those and run them through high-fidelity simulations. That's really what I mean when I say they augment each other. I don't know if that resonates with our users and with you here.
Todd: I think it certainly resonates with me, Fatma. And I guess if I think about that and how customers would use it, could you go a little further into the value? What value does that bring customers? How does that help them in the development of their products?
Fatma: Sure. We have some customers, for example, that could only afford to run one high-fidelity simulation because of the time it takes and the computational resources it requires. Imagine you had to make design decisions with one data point. That's where we're combining the power of data science with the physical and computational sciences: using both technologies, you explore ideas really quickly with machine learning models, then take the outcome of that to run the high-fidelity simulation, so you don't have to rely on just one data point to make design decisions. So we have customers that use these machine learning models everywhere from the quotation phase, to give quick quotes to their customers, all the way to designing new products that don't look like any previous product, which requires them to explore many more designs, maybe with multiple manufacturing processes, to understand the trade-off between cost and performance. The AI models enable you to do this really quickly and give you design directions that help you balance cost, performance, and anything else you have to consider for your product. And once you have these high-potential designs, designs that are most likely going to work, of course, we always suggest that you go and validate them with precise simulations so you get the right performance.
Justin: I like one of the things you said about exploring the design space, so I'm going to comment on that and then ask you a follow-up question. I think that's one thing people have recently started to discover: we have all these methods to tear into our design space in terms of sampling, but a lot of the time it's really static and ineffective. You set up the range of cases you want to simulate in your design space, and off you go; the results are what they are. But our machine learning models give us a lot of convenient pieces of machinery, like gradients and variances and other ways to look at how our results change across the space. When you have those tools, you can converge, in an adaptive, adept way, on which cases to run, because you see where the results are changing the most and can economically place your samples there accordingly. But the question I wanted to pose to you is to clarify a definition for synthetic data. I don't think it gets talked about enough, and I think it's a really powerful use case. So maybe for myself and everyone else listening, can you put out a definition, a line in the sand: when you say synthetic data in this context, what are we talking about?
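As a concrete illustration of the adaptive sampling Justin contrasts with static plans, the sketch below uses a Gaussian process surrogate's predictive variance to decide where to place each new simulation. The one-dimensional toy objective and all names are illustrative assumptions, not a description of any specific product.

```python
# Adaptive sampling sketch: run the next simulation where the surrogate
# is least certain, instead of following a fixed random/Latin hypercube plan.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def simulate(x):
    """Stand-in for one expensive solver run."""
    return float(np.sin(10 * x) * x)

# Start from a handful of seed runs.
X = np.linspace(0, 1, 4).reshape(-1, 1)
y = np.array([simulate(x) for x in X.ravel()])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.1))
grid = np.linspace(0, 1, 500).reshape(-1, 1)  # dense pool of candidate points

for _ in range(10):
    gp.fit(X, y)
    _, std = gp.predict(grid, return_std=True)
    x_next = grid[np.argmax(std)]            # most uncertain location wins
    X = np.vstack([X, x_next.reshape(1, -1)])
    y = np.append(y, simulate(x_next[0]))

print("sample locations chosen adaptively:", X.ravel().round(3))
```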
Fatma: When I say synthetic data, I mean data that is generated in order to train good machine learning models. When we look at historical simulation data, for example, there is a lot of information captured there, but it tends not to be as balanced as machine learning algorithms need to train good models, and it may not have as much variety in it as needed for the same purpose. So, for example, we have spent some time on extensible sampling methods. I agree with you, Justin, that sampling is very underappreciated but very critical for machine learning, because there's a saying that better data beats fancier algorithms. If you have a good data set, you can learn something from it even with a simple regression model. But if you don't have a good data set, no matter how fancy your deep learning architecture is, you're not going to get anything useful out of it, because the data does not represent the design space, the data is not balanced, and the data may not have enough variety in it. So curating your data sets is the most critical step in any machine learning process, even more so for product development. Sampling methods are great ways to augment your data sets to get an optimal distribution over the design space.

We also look into the distribution of the historical simulation data, which is what we can do with our geometric deep learning engines, and which is also very critical functionality for product design: the ability to leverage the simulation data you've generated in the past for future design decisions. When we find outliers in that data, for example, we can do one of a few things. We can look at an outlier, identify it as anomalous behavior, and maybe eliminate it. Or we can look at that outlier, realize there is some potential in that part of the design space, and augment the data set with simulation data around that outlier. That way we have balanced data sets that allow us to train good predictive models using machine learning.
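Here is a minimal sketch of that curation step, assuming scikit-learn's IsolationForest as one common outlier detector (the source does not specify Altair's method). A flagged run can either be dropped as an anomaly or used as a seed for synthetic design points sent back to the solver; the data, thresholds, and column names are illustrative.

```python
# Data curation sketch: find outliers in historical runs, then either
# drop them (anomalies) or densify sampling around them (potential).
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(2)
runs = pd.DataFrame({
    "thickness_mm": rng.normal(3.0, 0.2, 300),
    "mass_kg":      rng.normal(12.0, 0.5, 300),
})
runs.loc[0] = [5.5, 9.0]  # one design far from the rest of the data

flags = IsolationForest(contamination=0.01, random_state=0).fit_predict(runs)
outliers = runs[flags == -1]

# Option A: anomalous behavior (e.g., a failed solve) -> eliminate it.
cleaned = runs[flags == 1]

# Option B: a promising but under-sampled region -> generate synthetic
# design points around the outlier and submit them to the solver.
center = outliers.iloc[0]
new_designs = pd.DataFrame({
    "thickness_mm": rng.normal(center["thickness_mm"], 0.1, 20),
    "mass_kg":      rng.normal(center["mass_kg"], 0.1, 20),
})
print(len(cleaned), "cleaned runs;", len(new_designs), "synthetic seeds")
```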
Justin: Yeah, balance and distribution, these are terms not appreciated enough. I mean, think about how you're getting pulled in two different directions, right? On one end, if you're trying to design the best wing possible for your aircraft, or whatever it may be, you're going to tune your geometric parameters, identify which ones are best, and home in on those. You're going to start fixing parameters to get the best performance; if you're designing your product, that's what you do. So by definition, if you use those data sets for machine learning, they're super asymmetric: they're not exploring big ranges for the parameters you locked in to keep the best performance, so by definition they're not balanced. You notice this in the distribution. So yeah, point well taken: you need to add data, synthetic or otherwise, so you can build a fit-for-purpose machine learning model. Otherwise, you're going to go off and use it in what is, as far as the model sees, new territory, right? And that's where you start to worry about model prediction accuracy.
Fatma: Exactly. If it doesn't know a territory, then it's not going to be able to tell you anything about that part of the design space, which may actually have good outcomes. Exactly.
Spencer: Thank you all for that incredible context on the relationship between simulation and artificial intelligence. And with that, though, I think we are just about out of time for this first special episode. Once again, I have been your host Spencer Acain on the AI Spectrum Podcast. Tune in again next time as we continue this exciting discussion on what impact artificial intelligence will have on the world of simulation in the future.
Siemens Digital Industries Software helps organizations of all sizes digitally transform using software, hardware and services from the Siemens Xcelerator business platform. Siemens’ software and the comprehensive digital twin enable companies to optimize their design, engineering and manufacturing processes to turn today’s ideas into the sustainable products of the future. From chips to entire systems, from product to process, across all industries. Siemens Digital Industries Software – Accelerating transformation.


