
AI Spectrum – Simplifying Simulation with AI Technology Part 1 – Transcript

Recently, I had the opportunity to talk with Dr. Justin Hodges about the innovative work he and his team are doing in Simcenter with regard to AI, especially in the areas of simulation automation and knowledge capture. You can listen to the first part of our talk here or keep reading for the full transcript of part 1.

Spencer Acain: Hello and welcome to the AI Spectrum podcast. I’m your host, Spencer Acain. In this series, we talk to experts all across Siemens about a wide range of AI topics and how they’re applied to different technologies. Today, I am joined by Dr. Justin Hodges, an AI/ML Technical Specialist and Product Manager for Simcenter. Justin holds a PhD from the University of Central Florida, where his research focused on machine learning applications in turbomachinery. Welcome, Justin.

Justin Hodges: Hi, it’s nice to talk today.

Spencer Acain: Before we get started, can you tell us a little bit about your background and your position at Siemens?

Justin Hodges: My background is in aerodynamics, thermodynamics, and heat transfer, that sort of traditional track within mechanical engineering, at the University of Central Florida. In 2016, I was exposed to what happens in the CFD-CAE space once you start augmenting what you’re doing with machine learning. I fell in love with it and had my first professional experience with it in 2017. And basically every moment since, professionally and as a personal interest through extracurricular reading and experimenting, has gone toward furthering my understanding of AI and how it can help us get more accurate answers in less time in the CAE space. So, a couple of years ago, I started formally doing that as my job within our Simcenter landscape at Siemens Digital Industries. What that looks like today is basically this: we have a matrix of products in our Simcenter portfolio, STAR-CCM+ and all kinds of software that you may have heard about for different purposes. And AI and ML play a unique role in each of those products, for different personas, different tasks, and different design disciplines. So, my role in product management for that grid of products is essentially to look at where we have the most opportunity to introduce AI and ML for our users, whether that’s dedicated tools, specific AI/ML-powered features exposed at the surface in the GUI, or things working behind the scenes, perhaps unnoticed, to help the user get better answers faster.

Spencer Acain: Well, that sounds great. So, you mentioned using AI in a lot of different ways, both exposing it to the user and having it behind the scenes. Can you go into a little bit more depth on that? Can you tell us a little bit about how you’re using AI/ML in your work today and maybe in the future?

Justin Hodges: Siemens overall is a long-term investor in AI technology. I don’t remember the specific numbers, but we’re pretty much always a global leader in terms of patents and innovation in AI and ML, so this is nothing new whatsoever. But if you dial it down to the Simcenter space, with our portfolio of products for simulation and test, then you can start to bracket things into different categories. We have some groups that work on innovation and future technologies: things that are emerging as great candidates for higher returns for our simulation users but have yet to be vetted and proven. So, we have groups looking into those research projects and topics and investigating them. But then, on the other side of the spectrum, we have specific products and specific product features that are already being rolled out and that are AI/ML-heavy. It becomes challenging to label those in as few bins as possible, but there are a few macro use cases you could at least start the conversation with that are really popular among our industrial users. You could say surrogate modeling is one big area where AI and ML play a role; there’s solver innovation, which covers some of those behind-the-scenes things; and there’s a number of specific examples that fall into the macro use case of user experience and workflow intelligence. From there, you could branch out and name more, like automated engineering tasks: things you want to automate that would otherwise be a lengthy process or involve a lot of clicks. I think that’s a reasonable categorization.

Spencer Acain: So, it sounds like you’re using AI/ML in a whole wide range of different applications. And it would be great if we could just drill down and start looking at some of those applications and maybe just see a little bit more about those in detail. So, to get things started here, I’d like to ask you about how you and your team are using AI to improve the user experience.

Justin Hodges: That’s a good question. There seem to be certain themes in industry that arrive as big waves of effort, and one as of late is definitely this idea of user experience. It makes sense, because a big tenet of our portfolio is modeling complex physics faster. Naturally, as we want to model more and more complexity, some of the burden can fall on the user, who has to go through a lot of steps, and that can be painful. On top of that, simulation costs time and money, for licensing but also for the engineers’ time. So, it’s a hugely important tenet, and we’re treating it as a priority in how we look at things, so there are a bunch of examples. One recent one, which our engineering services team for STAR-CCM+ is offering today and already helping companies with, is part classification. The focus there is on the automotive industry, because when people are doing their simulations on these cars, there’s the inside cabin, there’s a vast number of parts under the hood, there’s the external aerodynamics. There are thousands of parts inside a typical assembly in simulation, maybe 10,000, and you want to reduce that: get rid of junk parts, parts you don’t need to model. So, the capability is a machine learning model that does this for the user, taking over the workflow of going through each part individually, naming it, and classifying exactly what it is: is it a bolt? Is it a nut? Is it a long rod? Is it a bearing? With other automation inside our portfolio, in this case STAR-CCM+, it can then automatically assign material properties, boundary conditions, and naming conventions, and assign each part to the right region. So, it takes something that would nominally take five to ten full working days, depending on the case, and reduces it to something that can happen in a couple of hours, in terms of running through and doing the automatic classification. That’s one really powerful example, and it really fits the theme of modeling more and more complex cars while also doing it faster.

Spencer Acain: So, it sounds like you’re using this AI to pare down the amount of work needed to classify the different parts and components in a car. Is there a reason you’d want to classify those components to begin with? Would they not already be classified?

Justin Hodges: The starting point is really CAD; maybe it comes from NX or that sort of thing. But when you’re doing your CFD simulations, or if you’re doing mechanical analysis, it really depends, but essentially the model is going to have to be adapted every time. In one software package you would assign material properties in a certain way, or you would want to refine the mesh on certain components to be finer. If you’re doing aerodynamics and you’re looking at the fender, in one particular part of the domain where it has very sharp curvature, you know that you’ll have separation there as a key concern for the overall drag performance. As a result, you have to label that part to be meshed and refined with mesh settings that you wouldn’t want to apply everywhere else, because such local refinement would be too costly globally. So, in the grand scheme of things, when you’re working with that number of parts, it’ll always be of interest to assign maybe four or five specifications to every part. And again, if you have upwards of 10,000 parts, that’s why it takes such a large amount of effort to do this manually. So, it’s really great to have a tool that can do this classification in a fraction of a second per part.
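To make the idea concrete, here is a minimal sketch of what geometry-based part classification can look like. The features, class labels, and data below are illustrative stand-ins, not Simcenter’s actual model or training pipeline; it assumes a hypothetical feature vector of bounding-box dimensions, surface area, and volume per part.

```python
# Illustrative sketch: classifying assembly parts from simple geometric
# features. All feature names, classes, and data here are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def synth_parts(n, typical, label):
    """Generate n synthetic parts clustered around a typical feature
    vector: [bbox_x, bbox_y, bbox_z, surface_area, volume]."""
    typical = np.asarray(typical)
    feats = rng.normal(loc=typical, scale=0.1 * typical, size=(n, 5))
    return feats, [label] * n

X_bolt, y_bolt = synth_parts(200, [0.02, 0.02, 0.08, 1e-3, 2e-5], "bolt")
X_nut,  y_nut  = synth_parts(200, [0.03, 0.03, 0.01, 8e-4, 6e-6], "nut")
X_rod,  y_rod  = synth_parts(200, [0.02, 0.02, 0.50, 5e-3, 1.5e-4], "rod")

X = np.vstack([X_bolt, X_nut, X_rod])
y = np.array(y_bolt + y_nut + y_rod)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```

In a production workflow, the predicted label is what downstream automation would key off, for example assigning per-class mesh refinement or material properties, which is where the days-to-hours savings come from.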

Spencer Acain: Yeah, it does sound really helpful, huge time saver there. It sounds like you’re using artificial intelligence in a lot of different ways to help with part classification, and then in simulation as well. And I would really love to hear more about how you’re using AI in simulation and testing beyond just what you’ve mentioned already.

Justin Hodges: So, the next one I’d probably touch on would be surrogate modeling. It’s simple in concept, but it has a pretty high value-add for the user. Essentially, there’s a finite amount of time in a design cycle, and oftentimes you want to run far more simulations than are possible within that constraint to find the best design. So, a common thing to reach for is: what if I create some sort of surrogate for the simulation, in the form of a reduced order model, a response surface model, or a machine learning based model? Those are more or less synonyms, but what if I could create one that is essentially real-time at inference? That way, instead of running another 10-hour simulation, I can, after enough simulations, have an accurate ROM, and then just query it to infer the results for some new design point. This can play out a few different ways. One would be design exploration: I want to explore the design space, so I can alternate between simulation and ROM-based population of the data points I’m really interested in. Or it could be done in optimization. Let’s say you have a bank of simulations and you train a machine learning model, and then you want to do a few different optimization cycles, where maybe each time the boundary conditions or operating conditions are different, and you want an idea of the optimal designs for each case. Rather than having to rerun all the simulations, as long as you’ve run enough to date to form a modest-sized database, you can train a surrogate to do it for you. And it just keeps compounding, because you also have to do sensitivity analysis, CAD sensitivity analysis, and robustness and reliability studies to make sure that your good designs don’t break down at infrequent operating conditions. There, it’s really common that you would try to fill a space with a sampling technique that would give you thousands of cases to run, which is just not feasible. So, yeah, AI and ML really play a role here. And we have some great features, inside HEEDS and elsewhere, for exploring this design space with lean sampling approaches, adaptive sampling, things like that. It couples really nicely. In the end, you save a lot of the time and resources it would take to run all the cases.
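As a concrete illustration of the pattern, here is a minimal surrogate-modeling sketch. A Gaussian process regressor stands in for the ROM or response surface Justin describes, and a cheap analytic function stands in for the expensive solver; this is not Simcenter code, just the general idea.

```python
# Minimal surrogate-modeling sketch: fit a fast model on a small bank of
# completed "simulations", then infer new design points in real time.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_simulation(x):
    # Stand-in for a 10-hour solve: some response vs. one design variable.
    return np.sin(3 * x) + 0.5 * x

# Design points already simulated (the "modest-sized database").
X_train = np.linspace(0.0, 2.0, 8).reshape(-1, 1)
y_train = expensive_simulation(X_train).ravel()

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(X_train, y_train)

# Near-instant inference at a new design point instead of another solve.
x_new = np.array([[1.37]])
mean, std = gp.predict(x_new, return_std=True)
print(f"predicted response: {mean[0]:.3f} +/- {std[0]:.3f}")
```

Once fitted, the same surrogate can be handed to an optimizer or queried thousands of times for sensitivity and robustness studies at essentially no cost.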

Spencer Acain: Yeah, it does sound like it’d be a huge benefit, similar to how you’re cutting down the part classification from weeks to hours. Is it a similar kind of benefit here in simulation when you move to AI-powered models instead of running full simulations every time?

Justin Hodges: Yeah, there are similar time savings, absolutely. Typical approaches would be random sampling or uniform sampling, but it’s much leaner and more efficient if you start from the beginning with these adaptive approaches and build surrogates; then you can do spin-off studies at basically no cost, purely based on surrogates and ROMs. It’s really a compounding effect in how much knowledge you can capture about your design space from just a core group of simulations.
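To illustrate what “adaptive” means here, the sketch below places each new simulation where the surrogate is least certain, rather than on a uniform or random grid. It reuses the toy setup from the previous sketch and is only a schematic of the general technique, not of the actual algorithms in HEEDS.

```python
# Sketch of uncertainty-driven adaptive sampling: each new "simulation" is
# run at the candidate design where the surrogate is least certain.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_simulation(x):
    return np.sin(3 * x) + 0.5 * x  # stand-in for a costly solve

candidates = np.linspace(0.0, 2.0, 200).reshape(-1, 1)
X = np.array([[0.0], [2.0]])             # two seed runs
y = expensive_simulation(X).ravel()

for _ in range(6):                        # budget of six additional runs
    gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(X, y)
    _, std = gp.predict(candidates, return_std=True)
    x_next = candidates[np.argmax(std)]   # most uncertain design point
    X = np.vstack([X, [x_next]])
    y = np.append(y, expensive_simulation(x_next))

print("sampled designs:", X.ravel().round(2))
```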

Spencer Acain: And then would you be able to recycle these models, so to speak, into future projects and continue the trend of capturing that data and shortening timeframes?

Justin Hodges: That’s a good point. Yeah, that really is another area a lot of companies are in, whether it’s gasoline engines moving toward electrification or just shorter manufacturing and design cycles. There’s a lot of pressure in the industry on these companies to get things out faster. And one key inhibitor to taking a three-year design-to-product timeframe down to half of that, or something similarly ambitious, is the serial nature of the work: I’ll do simulations at a few different fidelities in serial, chaining together different steps and tools, and maybe I’ll also have cross-discipline design, where the aero group does their work, then the heat transfer group does their work, and so on. What happens when you get to the end of the cycle and realize that one of the groups is not okay with the design? You more or less have to start the process over, which is very dangerous for delivering on time. If you’re in a multi-year project and you’re toward the end, starting all over can be catastrophic. So, one really big thing is transfer learning: I’ve done some problem, maybe “engine type one”, and the next year I do a similar-looking engine or similar-looking design, but it is unique. At least in the early stages of that new design, I can predict what my results will be at later stages using machine learning based models. Whereas before, I might not have such a good idea, because I’d have to wait and run the simulations at those later stages to see what happens. Having that knowledge early in the process, even as an inference or a hypothesis, is very valuable, and it can prevent those big schedule catastrophes where unforeseen design results come up late.
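As a rough illustration of the transfer-learning idea, the sketch below warm-starts a small network trained on plentiful data from a previous design using only a handful of early runs on the new one. The data and model are synthetic stand-ins showing the general pattern, not a Simcenter workflow.

```python
# Sketch of transfer learning between designs: pre-train on last year's
# "engine type one" data, then fine-tune on a few runs of the new design.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Plentiful data from the previous design...
X_old = rng.uniform(0, 1, size=(500, 3))
y_old = X_old @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=500)

# ...but only a handful of early-stage runs on the new, similar design.
X_new = rng.uniform(0, 1, size=(20, 3))
y_new = X_new @ np.array([1.1, -1.8, 0.6]) + 0.1 * rng.normal(size=20)

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                     warm_start=True, random_state=0)
model.fit(X_old, y_old)   # pre-train on the legacy design
model.fit(X_new, y_new)   # warm_start=True: continue from learned weights
print(f"fit on new design, R^2 = {model.score(X_new, y_new):.2f}")
```

Because the second fit starts from weights learned on the old design, far fewer new simulations are needed to get a usable early-stage predictor than if the model were trained from scratch.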

Spencer Acain: I would imagine having to go back to the drawing board at the end after you’ve completed everything would be a big issue for a lot of companies or designers, in general, really.

Spencer Acain: But that is all the time we have for this episode. Once again, I’d like to thank Dr. Justin Hodges for joining me today on the AI Spectrum podcast. I’ve been your host, Spencer Acain. Thank you all for joining me, and be sure to tune in for the next episode, where Justin and I will continue our chat.


Siemens Digital Industries Software helps organizations of all sizes digitally transform using software, hardware and services from the Siemens Xcelerator business platform. Siemens’ software and the comprehensive digital twin enable companies to optimize their design, engineering and manufacturing processes to turn today’s ideas into the sustainable products of the future. From chips to entire systems, from product to process, across all industries. Siemens Digital Industries Software – Accelerating transformation.

Spencer Acain


This article first appeared on the Siemens Digital Industries Software blog at https://blogs.sw.siemens.com/thought-leadership/2022/09/27/ai-spectrum-simplifying-simulation-with-ai-technology-part-1-transcript/