
How AI is accelerating design space exploration transcript – Part 3

By Spencer Acain

In a recent podcast, Dr. Gabriel Amine-Eddine, Technical Product Manager for the HEEDS Design Exploration Team, continued a discussion on how AI is changing design space exploration, the power of complexity, and the necessity of uncertainty awareness. Check out a transcript of that talk here, and listen along here or at the link below.

Spencer Acain:  Hello and welcome to the AI Spectrum podcast. I’m your host, Spencer Acain. In this series, we explore a wide range of AI topics from all across Siemens and how they’re applied to different technologies. Today, I am once again joined by Gabriel Amine-Eddine, technical product manager for the HEEDS Design Exploration Team. In the previous parts, we discussed the ways that HEEDS is implementing artificial intelligence to accelerate the design space exploration process, as well as the ways that the design of the AI system itself builds trust and confidence in the results by understanding the limitations and potential errors in its own predictions and inferences. But now I’d like to change tack and loop back around to something you talked about in a previous part. You mentioned the idea of holding some parameters fixed in the model, or basically keeping your digital twin as close to the real world as possible.

But would there be a benefit to holding some of the parameters fixed during this optimization process? So basically simplifying the model a little bit to reduce the computational complexity. If you know there are certain areas of your design exploration, certain variables, that don’t have a strong effect on your final result, you could just assign them a value and then move on. Is there a benefit to that?

Gabriel Amine-Eddine: Really good question. I was actually asked something similar at an optimization conference, because one of the methods we have in HEEDS Post is an influence analysis, and that’s been around for quite some time. In the traditional approach, you’ve got your digital twin, you’ve got 50 parameters, and there’s concern about the curse of dimensionality: oh my God, I’m going to have to run so many computationally expensive simulations. So typically the engineer would ask, which are the influential parameters, the variables that affect my responses of interest? They run design experiments relatively locally around their initial concept.
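To make that traditional local screening concrete, here is a minimal sketch in Python, assuming a hypothetical simulate() function standing in for the expensive digital twin and a simple one-at-a-time perturbation around the baseline concept; it illustrates the general idea rather than HEEDS’ own influence analysis.

```python
import numpy as np

def simulate(x):
    # Hypothetical stand-in for an expensive CAE simulation:
    # returns one response of interest for a design vector x.
    return x[0] ** 2 + 0.5 * x[1] - 0.001 * x[2]

baseline = np.array([1.0, 2.0, 3.0])   # initial concept
step = 0.05                            # 5% local perturbation
y0 = simulate(baseline)

influence = {}
for i in range(len(baseline)):
    perturbed = baseline.copy()
    perturbed[i] *= 1.0 + step         # nudge one parameter at a time
    influence[i] = abs(simulate(perturbed) - y0)

# Parameters with a negligible local effect would be fixed as constants --
# the catch being that this ranking only holds near the baseline design.
for i, effect in sorted(influence.items(), key=lambda kv: -kv[1]):
    print(f"parameter {i}: local effect {effect:.4f}")
```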

And from that, they can determine: is there a significant change in the outputs that I’m interested in? If there is, great, this parameter can be seen as influential and we would keep it; but if not, then we can say maybe we don’t need the parameter, we can just keep it constant. Now, the downside is that the influence analysis performed by whatever DOE method is used at that time is fundamentally a local assumption, whereas in design space exploration you are fundamentally trying to explore in a global manner across the design space. So if you were to limit the variables or make some variables constant, you would actually limit the potential for finding that novel, drastically improved design. It’s a strange trade-off when you consider the consequence of that action. With HEEDS, we say instead of doing it at the start, before you do an optimization study, keep all of your variables.

You’ve got plenty of opportunity. In fact, the more variables or parameters you have available to change in your product or system, the more diversity is possible, and our Sherpa methodology is very well suited to that: it can explore that design space very rapidly. And the cool thing is that we can use some of these same techniques for identifying influential variables. Instead of doing it before the optimization, we do it after the optimization, because at the end of an optimization study we’ve got a huge amount of data that is all focused on high-performing designs. If we now run that parameter influence analysis on this data, we can identify not just which variables influence the response; we can actually say which variables lead to the performance of my responses.
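As a rough illustration of ranking variables after an optimization study, here is a minimal sketch that assumes the optimization history has already been collected into arrays; the synthetic data and the random-forest importance measure are hypothetical choices for illustration, not necessarily what HEEDS uses internally.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical optimization history: each row is an evaluated design,
# each column a design variable; `performance` is the objective value.
rng = np.random.default_rng(0)
designs = rng.uniform(size=(500, 8))          # 500 designs, 8 variables
performance = designs[:, 0] * 3 + designs[:, 3] ** 2 + 0.1 * rng.normal(size=500)

# Focus on the high-performing subset, which is where the data from the
# end of an optimization study is concentrated.
top = performance > np.quantile(performance, 0.75)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(designs[top], performance[top])

# Rank variables by how strongly they drive performance in that region.
for i in np.argsort(model.feature_importances_)[::-1]:
    print(f"variable {i}: importance {model.feature_importances_[i]:.3f}")
```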

And once we identify those variables, we can say, “Right, this part of our product here, we can improve it by adding more technology; we can focus our R&D efforts on that technology.” That’s how we treat the approach to parameter reduction. The downside is that if we wanted to then do another optimization study, we could also say, “Why not still do a parameter reduction at the end of an optimization? We now know which are the most influential variables that affect the performance of our product, so let’s hold the other variables constant and extend our optimization study.” We are then faced with the challenge that if we keep some parameters constant, they might not necessarily be the right subset to fix: one subset of parameters might be responsible for the reduction in mass, while another subset might be responsible for a change in the strength of the product in a different area.

So there’s nothing wrong with doing it, but you could do that endlessly until you’ve only got one variable left to optimize. You can do it once at the end of an optimization study, but I would say use that information to add more technology and more parameters to your optimization problem. And we can do other things as well; there are different pathways for what you do with the information. Adding more variables and more technology in targeted areas is one thing. But you can also identify which parameters you would need to apply input perturbations to if you were trying to model things such as environmental effects or manufacturing tolerances.

And that will also give you awareness of the robustness and reliability of your product when it’s faced with those uncertainties in the real world. Because in computation, if you imagine the gears in a car engine, everything is perfect: perfect geometry, no defects. But when you start to cut metal and form the actual gears, you’re going to have these tiny imperfections which can cumulatively add up and affect the lifetime of the product. In Siemens Energy’s case it’s pretty important, especially when gas turbines are running at high RPM. Every blade that you see on an aerospace engine is positioned at its circumferential location by design; they actually tap the blade to check its vibration, because if you have too much vibration that is unable to cancel itself out, you’re going to have increased wear on bearings, decreased product lifetime, possibly increased thermo-mechanical fatigue, all those kinds of considerations. So designs that are optimized without consideration of uncertainty could lead to non-robust products.
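To illustrate that last point, here is a minimal Monte Carlo sketch of checking robustness by perturbing the influential parameters with an assumed manufacturing tolerance; the simulate() function and the tolerance values are hypothetical placeholders rather than a description of how HEEDS itself evaluates robustness.

```python
import numpy as np

def simulate(x):
    # Hypothetical stand-in for the expensive simulation: returns a
    # performance metric (say, fatigue life) for a design vector x.
    return 100.0 - 5.0 * (x[0] - 1.0) ** 2 - 2.0 * abs(x[1] - 2.0)

nominal = np.array([1.0, 2.0, 3.0])       # optimized "perfect geometry" design
tolerance = np.array([0.02, 0.05, 0.0])   # assumed std dev per parameter;
                                          # only the influential ones are perturbed

rng = np.random.default_rng(42)
samples = nominal + rng.normal(scale=tolerance, size=(1000, 3))
lifetimes = np.array([simulate(s) for s in samples])

print(f"nominal performance : {simulate(nominal):.2f}")
print(f"mean under tolerance: {lifetimes.mean():.2f}")
print(f"worst case (1st pct): {np.percentile(lifetimes, 1):.2f}")
# A large gap between nominal and worst case flags a non-robust design.
```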

Spencer Acain:  It sounds like it’s a real trade-off if you want to be setting these variables constant. And in your methodology it makes more sense to just let things be variable and take the benefit of that variability, since something that doesn’t seem important in one segment of the design could actually be important somewhere else.

Gabriel Amine-Eddine: Correct.

Spencer Acain:  I see.

Gabriel Amine-Eddine: We want to exploit every possible grain of knowledge that we have, not limit the potential for more knowledge whilst doing so.

Spencer Acain:  I think that’s a wonderful way to put it. But to continue on: you’ve mentioned that, in this context, there are a lot of AI models in this tool. Can you reuse these models? You said they obviously have to be trained with simulation knowledge and data ahead of time before they can be implemented into an actual design workflow. But once you’ve gone through the effort of training them, can you continue to use them for different, similar projects?

Gabriel Amine-Eddine: Absolutely, yes. There’s knowledge that is captured within these models. So there’s a concept known as transfer learning. Say you’ve got five product families from the past; you’ve got data that is sitting in archive. It’s a problem that many customers are faced with: huge amounts of CAE simulation data, 10-plus years of knowledge, and what to do with it all. They can effectively train these AI models on that data and then apply those same AI models to new concepts that they want to develop. And that’s really important, because you’ve got knowledge that is retained from the past. As the concepts evolve and new requirements come into play, these models are effectively ready to make predictions from design one. They don’t have to wait to actually gain the knowledge.

They’ve already got some prior knowledge, and that’s what makes them pretty beneficial: they can start making predictions straight away. We can use them in HEEDS to decide when and where we use a prediction versus a simulation. And as new simulations come in, we can also keep training with the new simulation data, so we can adjust and adapt to the new design space as we’re exploring, boosting the usefulness of these models even further.
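As a loose sketch of that reuse pattern, pretraining a surrogate on archived simulation data and then fine-tuning it as new simulations arrive, here is a hypothetical example using a small neural network; the data, architecture, and training settings are illustrative assumptions, not the actual models inside HEEDS.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Hypothetical archived CAE data from past product families:
# design variables -> simulated response.
X_archive = rng.uniform(size=(2000, 6))
y_archive = X_archive @ np.array([2.0, -1.0, 0.5, 0.0, 3.0, 1.0]) + 0.05 * rng.normal(size=2000)

# Pretrain the surrogate on the archive ("prior knowledge").
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500,
                         warm_start=True, random_state=0)
surrogate.fit(X_archive, y_archive)

# New project: a handful of fresh simulations on a related concept.
X_new = rng.uniform(size=(50, 6))
y_new = X_new @ np.array([2.2, -0.8, 0.5, 0.1, 2.9, 1.1]) + 0.05 * rng.normal(size=50)

# Fine-tune: with warm_start=True, fit() continues from the pretrained
# weights instead of reinitializing, so the archived knowledge is retained.
surrogate.set_params(max_iter=100)
surrogate.fit(X_new, y_new)

# The surrogate can now score candidate designs cheaply; an exploration
# loop might fall back to a full simulation where confidence is low.
print(surrogate.predict(X_new[:3]))
```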

Spencer Acain:  That sounds like an amazing way to capture and reuse a lot of this past design knowledge and not let it go to waste, basically. Because with existing methods, I don’t think you can really leverage all of that to its fullest potential. And now you can just feed it into the model and all of a sudden it becomes an integral part of the design process itself. You’re really standing on the shoulders of giants, so to speak: you’re capturing all of that past design information, everything you’ve done in the past, instead of letting it sit. So do you see the way AI is integrating into the design process and capturing this knowledge changing the way products are designed, or the way people will use design tools, in the future?

Gabriel Amine-Eddine: Yeah, I see engineers using all possible tool sets to accelerate their processes, what they’re doing on a day-to-day basis. Having the toolbox of AI alongside any efforts that they’re doing is pretty valuable. We’ve seen a lot of how assistants have been used, and we see a lot of models being used, obviously, to share knowledge between different teams or to reduce the complexities involved with IP protection as well. So I definitely think there is a movement in that field, and a lot of demand for how to benefit from the technology that AI as a whole field is offering these days.

Spencer Acain:  Yeah, I agree with that. The idea that it’s just one more tool, that AI can almost layer on top of existing technology and be a more effective, better, stronger way to use what we already do, and just enhance that further.

Gabriel Amine-Eddine: Yep. We see it in almost everything that we do on a day-to-day basis: on our mobile phones, or when we walk into a building and are sometimes greeted by some assistant.

Spencer Acain:  So is there anything else interesting you’d like to share before we wrap this up? Either about AI, your work, or just the broader world of artificial intelligence in general?

Gabriel Amine-Eddine: The thing is that we’ve got a lot of plans for AI, for HEEDS, and also for our customers to actually utilize and benefit from. And we’ve got a very good team of people involved. I do want to give special mention to the architects from the Siemens Technology Division, DAI, Stefan. They’ve been instrumental in bringing this technology to light, putting in a lot of the thought process and a lot of brainstorming, along with the HEEDS development team. We had a good team of people integrate this within the product and bring it to the rest of our customer base.

Spencer Acain:  Well, fantastic. I think that is a great place for us to end things here. So thank you, Gabriel, for joining me. It’s been a pleasure.

Gabriel Amine-Eddine: Thank you.

Spencer Acain:  Once again, I have been your host, Spencer Acain, on the AI Spectrum podcast, where we continue to talk about all things AI. Tune in next time to continue to learn more about the exciting world of artificial intelligence.


Siemens Digital Industries Software helps organizations of all sizes digitally transform using software, hardware and services from the Siemens Xcelerator business platform. Siemens’ software and the comprehensive digital twin enable companies to optimize their design, engineering and manufacturing processes to turn today’s ideas into the sustainable products of the future. From chips to entire systems, from product to process, across all industries. Siemens Digital Industries Software – Accelerating transformation.


This article first appeared on the Siemens Digital Industries Software blog at https://blogs.sw.siemens.com/thought-leadership/2024/05/02/how-ai-is-accelerating-design-space-exploration-transcript-part-3/