Understanding the intersection of AI and simulation – Part 2 Transcript
Generative AI is proving to be a powerful force multiplier across many industries, allowing a single user to do more in less time. Even highly complex tasks, like product design, are reaping generative AI-driven benefits. However, as powerful as these AI tools are, they must be carefully applied in conjunction with, not in place of, existing tools like simulation. Only together can AI and simulation achieve a better result than either could alone.
Check out the full episode here or keep reading for a transcript of that conversation.
Spencer: Hello, and welcome to the AI Spectrum Podcast. I'm your host, Spencer Acain. In this special episode, we are joined by AI experts from Siemens and Altair to look at the impact that artificial intelligence will have on the world of simulation going forward. Today I'm joined by Todd Tuthill, Vice President of Aerospace, Defense and Marine Industries at Siemens, Justin Hodges, Senior AI/ML Technical Specialist at Siemens, and Fatma Kocer, Vice President of Engineering Data Science at Altair. To pick things up from where we left off in the last episode, I'd like to invite Todd, with his extensive industry experience, to enlighten us on where the roles of generative artificial intelligence and physics-based AI start and end in industry. You know, which one comes first? Is it artificial intelligence? Is it physics-based simulation? Neither, both? Please tell us more, Todd.
Justin: Spencer, good question. I think before I let Todd educate all of us, I just want to add some words here to these terms, right? So, physics-based simulation, right? We're viewing it as this precise simulation of higher fidelity, of higher accuracy. Now that works in conjunction with ML surrogates, and they fit different roles, right? With ML, we're getting something directionally accurate, good at spotting trends, good at distinguishing relative performance from one design to the next. Those are things ML is very good at, right? But in terms of, you know, the king of validation, trustworthiness, precision, accuracy, high fidelity, whatever term you want to put there, right? That is what we're calling here physics-based simulation. So sorry to interrupt, Todd, but I'll pass the ball there to you as far as the either-or-both sort of choice.
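As a rough illustration of the division of labor Justin describes, here is a minimal Python sketch in which a fast ML surrogate ranks many candidate designs and only the most promising few are passed on to trusted, high-fidelity, physics-based simulation. The training data, the surrogate choice, and the run_high_fidelity_sim() call are hypothetical placeholders, not a specific Siemens or Altair API.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Pretend we already have a few hundred past designs with simulated results.
X_past = rng.uniform(0.0, 1.0, size=(300, 5))  # design parameters
y_past = X_past @ np.array([2.0, -1.0, 0.5, 0.0, 1.5]) + rng.normal(0.0, 0.1, 300)

# Train a quick, directionally accurate surrogate on that history.
surrogate = GradientBoostingRegressor().fit(X_past, y_past)

# Screen 10,000 new candidates almost instantly with the surrogate...
candidates = rng.uniform(0.0, 1.0, size=(10_000, 5))
predicted = surrogate.predict(candidates)

# ...then validate only the top five with expensive physics-based simulation.
top_designs = candidates[np.argsort(predicted)[-5:]]
for design in top_designs:
    pass  # run_high_fidelity_sim(design)  # placeholder for the real solver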
Todd: No, thank you, Justin. That's a great clarification. And let's talk about kind of where we're at with the state of the art right now in simulation and AI, and talk about a little history and kind of where this debate's at, what caused the debate, where it's coming from. And when I think about this, I see two sides to it. There are people that say, well, you talk about the preciseness of the simulation, the precise math- and physics-based simulation. And there are people that think, well, that's it, right? That's what we're familiar with, that's how we've developed complex products. You go back to Isaac Newton, you know, the foundation of our modern physics- and math-based engineering kind of came from the foundational work he did. And we're very comfortable with that. And there are many people who say, yep, that's what we've done. That's how we put rockets in the air. That's how we put jets in the air. We should continue with that. What's the role for AI and ML anyway when we now have very, very powerful computers, hyperscalers? Let's let them solve these difficult math- and physics-based problems. So that's kind of one side. And then there's the whole other side of the discussion where people are saying, well, yeah, we've done math and physics, but that's kind of how my grandfather did engineering, right? What's the modern way to do engineering? The modern way to do engineering is data. It's all about the data. You and Fatma had a great discussion just a moment ago about quality data. You talked about synthetic data to train models. And there's this thought and belief in the industry that if I just have enough data, if I have quality data and I put it into a great algorithm, that algorithm is going to teach itself to do physics, basically create kind of a digital Isaac Newton, come up with physics and be able to do the simulation without all of the work, and be able to make the decisions that we look to precise math and physics for. And basically that ML and AI is going to get smart enough to do all this work, and we really don't need the traditional kind of precise math- and physics-based simulation we're used to. Those are the two sides. Okay, so now you look at where we're going at Siemens with the acquisition of Altair. What's our position? And at Siemens, we think the answer is both. It's not one or the other. I look at it in simulation, you know, this is a podcast specifically on simulation, but I'll talk for a second about the broader Xcelerator portfolio and how I see these tools being used in industry today. We at Siemens have a long history. We lead the world, as I said, in industrial software, in CAD, in CAE, in electrical development, electrical design, in simulation, and those are all math- and physics-based tools that we have a long history of developing and that are trusted in the industry. But we're not just going to leave those tools static. We're investing, we're doing things to make those tools better all the time. And one of the things we've been investing in for quite a while, one of the key things with the Altair acquisition, was to really help us supercharge the way we augment those simulation tools with AI. So it's not one or the other. It's both. It's the foundation, the trust, the long history and engineering of all that precise math and physics. That's absolutely required and will always be required, but we can make it better. We can augment it.
We can make it easier to use, and like Fatma was talking about, there are ways to bring more value to customers by augmenting it with AI/ML. That's what we see as the future of this industry and the future of industrial software.
Fatma: Yeah, I don't have much more to add to Todd and Justin's responses, but the way I summarize it is: product design is a very complex process, with many design decisions to make and with many constraints to fit the design into. And simulation is a powerful tool that helps us make these design decisions faster. And AI/ML is a new tool in our toolbox. And as engineers, we have to use them appropriately and sort of multiply the effect of both by using them at the right time for the right problem.
Justin: And I don't think we need to be scared or treat this as magic, because it resembles a lot of things most people touched on in their undergrad in engineering. What's the difference between a log law or power law or exponential or whatever sort of regression model you used all the time, almost thoughtlessly, in simple exercises in school, versus a neural network, right? They have the same purpose in the end: fitting, regressing data with the best fit possible. So categorically, same purpose. There are laws of physics and fluid mechanics and material science and all these that dictate the form and the functional shape that we pick for some of these regression models that we're used to, that are more simple. The only nuance with machine learning, I guess the biggest nuance, is that the functional form is so much more flexible and scalable. So the relationship is less intuitive and less tied down to a physical law or phenomenon or model. But we shouldn't forget, you know, we're very familiar with these sorts of regression models and exercises. And so it can be sort of demystified as this thing that's categorically different, because it's used in a very similar way to what we've been seeing in engineering for decades now. The capability, combined with hardware improving, is just that now we can put in much more sophisticated, flexible, higher-scaling methods that can capture things that otherwise would just be insurmountably hard and large to capture.
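To make Justin's comparison concrete, here is a small, purely illustrative Python sketch that fits the same synthetic data two ways: with a classic power law whose functional form is fixed up front, and with a small neural network whose functional form is far more flexible. Both are just regression; the data and model settings here are invented for the example.

import numpy as np
from scipy.optimize import curve_fit
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
x = np.linspace(1.0, 10.0, 200)
y = 3.0 * x**1.7 + rng.normal(0.0, 2.0, x.size)   # noisy synthetic "measurements"

# Classic regression: impose a physically motivated functional form y = a * x^b.
def power_law(x, a, b):
    return a * x**b

(a, b), _ = curve_fit(power_law, x, y, p0=(1.0, 1.0))

# Neural network: no assumed form, just a flexible function approximator.
nn = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
nn.fit(x.reshape(-1, 1), y)

print(f"power law fit: a={a:.2f}, b={b:.2f}")
print(f"neural network prediction at x=5: {nn.predict([[5.0]])[0]:.2f}")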
Todd: Yeah, I hear people asking all the time about, you know, will AI replace this? Is AI going to make this obsolete? And one of the other questions people ask me all the time is, well, is AI going to replace engineers? Is AI going to put people out of work? And my answer to that is typically, no, I don't believe AI is going to replace engineers. I don't think AI is going to take your job away. But your job might be replaced by another engineer who knows how to use and leverage AI if you're not willing to learn it. You know, like Fatma said, it's a new tool in the toolbox. It's like rolling the clock back 40 or 50 years and saying, well, I've been using a slide rule all my career. Do I really need to use a computer or a calculator? Well, yeah, probably you do. It's a tool in the toolbox. And I see AI/ML as a tool not unlike some of those things we added 40 or 50 years ago. It makes our jobs better. It makes them faster. And it allows us to focus on higher-level things because we can kind of hand the rest over to the AI/ML to do for us.
Spencer: Well, thank you, Todd, Justin, Fatma, for those thoughtful answers on this interesting topic. You know, like you say, it's a tool just like any other in the toolbox. But we have talked about a lot of different kinds of models, these different kinds of tools, right? From generative AI to machine learning models to, you know, precise physics-based simulation models. But there's one we haven't touched on yet, which is, of course, foundation models. So what are these industrial foundation models? Justin, can you maybe paint us a picture, give us some more details on, you know, what these types of models are?
Justin: Yeah, absolutely. And I would cherish Fatma's response to extend this and add more to it. But basically, I think the most helpful way to start is painting this taxonomy, because you do have foundation models, right, universally understanding language; ChatGPT is the common example people can relate to. No matter what dialect, language, or era of human history you want to talk from, ChatGPT, or some version of it, can sort it out, understand you, have a conversation with you. So in that sense, foundation models are obviously real. And the modality will change, from common conversation about things that mainstream, layman people understand, to a modality of engineering, right, ChatGPT for engineers, to say it really in a sort of silly way. But, you know, this is a modality that can speak the language of engineering processes and understand the context of all these different industrial scenarios and data types: 3D models, CAD drawings, 2D drawings, time series data, technical specifications, product specifications, etc. Putting all of this into a very competent foundation model is what I know of as an industrial foundation model. And these are real; these are a core, central vision of many companies, including Siemens. And this will be, you know, powering our agents and doing a lot of things for engineers. If you want to know more, there's actually a specific part of Siemens called the Siemens AI Lab in Munich, which focuses on communicating these sorts of things. So there's plenty more to dive into there and in other places. But the big difference is that's not what we're talking about when we say, you know, AI will not replace simulation, because there is this misnomer and false information out there in the market that foundation models for simulation will emerge: models that don't need training or tweaking, generalize extremely well, extrapolate in space and extrapolate in time. And, you know, supposedly you will hand it to an aerodynamicist and say, here, you won't need to do simulation; this model right here will just predict everything you need, accurately enough. And I don't know anyone with technical credibility, or that's been working in the field, who really believes in that. I think there's, you know, miscommunication, and there's a different vision of something that I think a lot of us believe in, which is that in the next few years we won't have those foundation models for simulation, but we will have these very narrow, problem-scoped AI models, machine learning surrogates. So if you care about a certain class of cars and their aerodynamics, great, you can pump in a bunch of data very thoughtfully, progressively tune and tweak the model with experience as you generate it from your design projects and data science teams. And then when you go to do the next similar iteration of that car, that surrogate will still be useful and can be continually improved. And that is what I would call a narrow foundation model. But this definition is really key, right? Foundation models, you know, they have this big, aggressive set of criteria that's really robust: generalization into new domains, new problems, this extrapolation in space and time I mentioned, transferability to other problems without fine-tuning. You know, that is not on the horizon, right? That is far away.
And that's much different than this industrial, narrow-scope sort of foundation model, which will be very useful and will probably arrive soon. But yeah, like I said, Fatma, I'd cherish your response here. I would love to hear you add to that.
Fatma: Yeah, I think I 100% agree with your sentiment. A foundation model that is so generalizable that you could just pass it to someone else, and they could use it without the need for any other toolset like simulation: I don't see that happening. If somebody were to claim that, I would probably question if they've ever designed a product and know all the complexities of that product design process. So of course, we are at a stage where we are comfortably training machine learning models for a particular application, let's say, you know, a truck hood, right, for impact. We have data, we train a model, and then we can use that model to do quick design exploration of how we should change the hood to reduce the pedestrian impact, for example. And then we find good designs, and then we run high-fidelity simulations. Using transfer learning with what I would call pre-trained models, and I think, Justin, you used the term narrow foundation models, I think they're comparable things, we can then take that model and, with a much smaller data set, fine-tune it for, you know, maybe an SUV hood, which may be a bit smaller or have different dimensions. And that reduces the need for data, but we're still able to use that pre-trained machine learning model. But to get to a foundation model, you have to have so much data and so much training that it understands how mechanical parts work within that training data. And so I think that, for product design, that's really far off, from a technology perspective, from a data perspective, and from a computation perspective. Right now, we're at a stage where we can train, you know, machine learning models for a certain application, and we can deliver them as pre-trained models to do transfer learning for an extension of that application. But a foundation model is a truly generalizable model. And I think that's far off for product development. And when we do get there with respect to technology, data, and computation resources, simulation and foundation models will still go hand in hand, because we are engineers, we come up with new designs, new ideas, innovations. And for those, we don't have the data to train a machine learning model that can replace the simulation, right? When we're at a design idea that doesn't have any data to train a machine learning model, we would be running simulations to understand, you know, that innovative idea, that design idea. And so it's like a combination of inductive learning and deductive learning, right? Through simulations, we'll understand how one behaves. And once we have, you know, a lot of them, we can actually train machine learning models. So they're still going to be augmenting each other; even if we were at that stage for product design, we would need both of them to sort of progress side by side.
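Here is a minimal sketch of the transfer-learning idea Fatma describes, written in PyTorch with made-up data: a surrogate assumed to be pre-trained on a large set of truck-hood impact results is adapted to a much smaller SUV-hood data set by freezing the early layers and fine-tuning only the last one. The network shape, data sizes, and training settings are illustrative assumptions, not a description of any specific Altair or Siemens workflow.

import torch
import torch.nn as nn

# Surrogate mapping hood design parameters to a predicted impact metric.
model = nn.Sequential(
    nn.Linear(8, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
# ...assume the model was already trained on the large truck-hood data set...

# Freeze everything except the final layer.
for param in model.parameters():
    param.requires_grad = False
for param in model[-1].parameters():
    param.requires_grad = True

# Fine-tune on a much smaller SUV-hood data set (random placeholders here).
x_suv = torch.randn(50, 8)        # 50 SUV hood designs
y_suv = torch.randn(50, 1)        # their simulated impact responses

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(x_suv), y_suv)
    loss.backward()
    optimizer.step()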
Todd: So as I hear you guys talking about this, I thought of a real customer use case and a conversation I had not too long ago with one of our aerospace customers who launches rockets. We were talking about models, we were talking about simulation. And he looked at me and he said, Todd, here's where I'm at. He said, when we launch a rocket, when we test a rocket, you know, it's incredibly expensive to do it in the physical world. We want to do that better in the digital world. But here's the trouble, he said: we're melting aluminum, we're turning aluminum into plasma every time we launch that rocket, and we're almost inventing and finding new aspects of physics. So when a customer comes to us and says, here's our problem, here's our domain in aerospace, how can we help them apply AI/ML and this precise physics-based simulation to help them when they're discovering new things? What would you say to that customer?
Fatma: I would say that customer, because they are still exploring aspects of the design, does not have enough data to train machine learning models to make good performance predictions. And they would have to continue accumulating data, whether that's test data, simulation data, or a combination of them, until they have enough data to train good machine learning models. But this is an example of what I was, you know, maybe in an abstract way, trying to explain: we're coming up with innovative ideas, and as we are learning about those ideas and how they behave, how they perform, how they can be improved, we don't yet have enough data to train machine learning models.
Justin: There's something I'm dying to add here, because I've been on the receiving end of this challenge for a long time. So when you talk about foundation models, you know, potentially, or I guess the misnomer of them coming on the scene and performing in this way, let's not forget the requirements that we're always handed when we do these sorts of ML surrogate exercises. For example, like you mentioned aerospace, so the Federal Aviation Administration, the FAA, or for pharmaceuticals the Food and Drug Administration, the FDA, you know, they require extensive companion efforts to go along with any ML surrogate. So even if the ML model were super accurate by, you know, this moonshot challenge that we're describing, right, like, let's just pretend it worked out and it was accurate, it's still probably not going to be accepted, because it needs companion validation that will guarantee safety, right, in these critical applications. So even that is missing and still requires this companion simulation effort. And let's also not forget the fundamentals here on the machine learning side. Like you said, data is so cherished, right, above a fancy model. Well, what's one of the best things you can imbue in your data? It's business knowledge. It's knowledge of the problem. Well, how do you do that in an engineering application, right? It is through simulation and things like that. You'd never be an ML engineer at, like, a financial institution and say, I have no idea what a loan is, or an interest rate or a credit score, so I'm going to build the best machine learning model in the world, right? No, you'd never strip that away; it would have to be encapsulated in your features, your feature engineering and your data. And the same goes in our context, right? So the simulation and physics and measurement data must be present, you know. So it's a moonshot challenge, right? But we need to remember the criteria we are responsible to provide as the AI folks so that these can be adopted as solutions.
Fatma: That’s a really good point. You always have to have the domain expertise to be able to use machine learning properly, correctly.
Todd: Yeah, I can think of the times where, you know, I'll use Copilot or ChatGPT, ask it a question, and I'll immediately say, well, that's not right. Because it happens. And, you know, truthfully, when I use those kinds of tools, I tend to use them in domains that I have some knowledge of. And I'm pretty afraid to just take a generic answer from one of those tools right now and apply it anywhere I don't have expertise, because I don't want to make a mistake. I don't want to look foolish. And I think in the domains where I have expertise, I see a whole lot of value. I don't know about the two of you, but I'm using Copilot and ChatGPT and generative AI. I use it every day. And I see a lot of ways it really helps me. But I don't know that I've ever taken it at its word and just used that and made that the answer. I've always used it to augment the kind of work that I'm doing.
Justin: Yeah, I think that's probably why you're sensing, as a listener, so much passion when we talk about, you know, ML surrogates and making them as generic as possible. It's because we've focused on this for so many hours of our lives to, you know, have these things successfully translate into usable tools in industry. And we want to make sure it's clear, you know, the requirements and what's real and what's not, because this sort of hype cycle and buzz language, you know, can detract from people actually adopting and using these things in the responsible way that we envision.
Spencer: Thank you all for those wonderful insights. But unfortunately, we are out of time for this episode. So once again, I have been your host, Spencer Acain, on the AI Spectrum Podcast. Tune in again next time as we get our three amazing guests back for the final episode of this special series on the intersection of artificial intelligence and simulation.
Siemens Digital Industries Software helps organizations of all sizes digitally transform using software, hardware and services from the Siemens Xcelerator business platform. Siemens’ software and the comprehensive digital twin enable companies to optimize their design, engineering and manufacturing processes to turn today’s ideas into the sustainable products of the future. From chips to entire systems, from product to process, across all industries. Siemens Digital Industries Software – Accelerating transformation.


