Industrial machinery and AI, episode 1: transcript
Chris Pennington: Shirish, could you give us a more in-depth introduction explaining your role here at Siemens?
Shirish More: I’m Senior Product Manager here at Siemens Digital Industries Software, where I lead product strategy and innovation for Design Center, specifically focusing on our SaaS offering. And within that SaaS offering, my focus is on the AI-driven capabilities in Design Center.
My role involves shaping the future of design and engineering software by working closely with our customers, development teams, and industry partners. I help define the roadmap for how we integrate emerging technologies like AI and cloud computing inside our Design Center offering to create more intelligent, connected, and efficient workflows for our users, particularly in industries like industrial machinery, automotive, and aerospace.
Chris Pennington: Shirish, how is AI transforming product development in industrial machinery?
Shirish More: That’s a great question. AI is really transforming product development in industrial machinery, and I would like to break it down into a few key areas.
First, let’s start with how AI is moving beyond traditional automation, where AI once helped with things like text or image classification. It’s now becoming aware of 3D geometry and its context, and in Design Center we are embedding AI that can understand, interpret, and even generate engineering content.
This is an area where reading engineering content, or understanding the engineering language, is going to be very important. And this is where Design Center, and especially AI in Design Center, is really going to help, because it’s going to include intelligent recommendations for design-specific features. It can help with design validation, keeping manufacturing best practices in mind. It can also start promoting design reuse, tasks that once required manual, time-consuming input from engineers.
Now this is where AI can come in really handy. Because it has been trained using industry best practices and historical data, AI reaches a point where it starts guiding the designers and engineers inside our Design Center tools toward wise decisions. That’s one example.
The second use case comes from real-world customers. Customers I have started interacting with about how they are leveraging the AI capabilities we have released so far have reported 10 to 20% efficiency gains from AI-driven workflows. Copilot is a very good example, and it’s a trending topic. These days customers are used to ChatGPT and DeepSeek, and there are a number of instances where they said, well, it would be nice if the Copilot in Design Center could start suggesting things that simplify the task.
This is exactly where they have started realizing the efficiency gains, because the Copilot in Design Center identifies things that down the road could start failing. This is where the power of AI comes in handy: we have trained machine learning models that can be deployed right onto the user’s machine, and we have trained large language models using Siemens IP, for example.
And while users are using our products, it continuously monitors the ways they are using the product and starts suggesting modeling or corrective actions, flagging things that might fail down the road. This is exactly where AI really comes in handy. These are some examples, in answer to your question, of how I see AI helping users who are designing industrial machinery equipment.
Rahul Garg: Yes, Shirish, great examples of where customers are beginning to see this. You know, one of the things I feel is that the whole adoption of AI is not something that just started a few months ago; we have been working on some of these areas for some time. That’s why I was wondering if you could share some thoughts on the evolution of how AI has been coming into our design products, specifically to enable the higher efficiencies you were just talking about.
Shirish More: Yeah, absolutely, Rahul, and that’s a very good question. It’s not that we just started with Copilot. You know, we were the first in the industry to introduce an adaptive UI, and by adaptive UI I mean this idea of monitoring how users are using our product. Once we start learning from users’ actions, especially in the context of the task at hand, we reach a point where we can start predicting things.
Under this umbrella of adaptive UI, in 2019 we introduced command prediction, wherein NX or Design Center learns from the user’s behavior and proactively surfaces the most likely next command. This is ideal for machine builders working on structured assemblies or repetitive design tasks, because we now detect a pattern: which dataset the user might have loaded and what kind of task he is going to perform. That triggers the AI model on the user’s machine, which detects common behavior and proactively starts suggesting commands, which in turn saves time and reduces the overall learning curve for newer engineers.
Rahul, you can see how, way back in 2019, we started introducing this concept of adaptive UI, which speeds things up, especially for machine builders and other users who are designing machines day in, day out.
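To make the idea concrete, here is a minimal sketch of how a frequency-based next-command predictor can work. It is illustrative only, not Siemens’ actual implementation; the command names and the simple bigram approach are assumptions for the example.

```python
from collections import defaultdict, Counter

class CommandPredictor:
    """Learns which command tends to follow another (a simple bigram
    frequency model) and suggests the most likely next commands."""

    def __init__(self):
        # previous command -> Counter of commands observed right after it
        self.transitions = defaultdict(Counter)

    def train(self, sessions):
        """sessions: lists of command names captured from past usage."""
        for session in sessions:
            for prev_cmd, next_cmd in zip(session, session[1:]):
                self.transitions[prev_cmd][next_cmd] += 1

    def suggest(self, last_command, top_n=3):
        """Return up to top_n most frequent followers of last_command."""
        followers = self.transitions.get(last_command)
        if not followers:
            return []
        return [cmd for cmd, _ in followers.most_common(top_n)]

# Three recorded modeling sessions (invented command names).
history = [
    ["sketch", "extrude", "edge_blend", "chamfer"],
    ["sketch", "extrude", "edge_blend", "shell"],
    ["open_assembly", "add_component", "assembly_constraint"],
]
predictor = CommandPredictor()
predictor.train(history)
print(predictor.suggest("extrude"))  # -> ['edge_blend']
```

A production system would condition on much richer context (part type, geometry, the task at hand) and run a trained model on the user’s machine, but the loop has the same learn-then-suggest shape.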
Rahul Garg: Good. And I would imagine this not only helps them in becoming more efficient with the capabilities of the tool, it probably enables them to start using new capabilities that they may not even be aware of. And as we introduce new capabilities in the products, the product itself is telling you that hey, to do this next process, maybe use this tool. It will be a lot faster and more efficient.
Shirish More: That’s exactly right. I mean, we are all used to smartphones and online marketplaces, and we always wonder, all right, how is this particular application predicting what I might purchase or what I might add to my shopping cart? Very similar to that approach, what we’ve been doing is profiling a given user based on the commands he is using and the dataset he might have opened. For example, if it’s a piece part, and specifically an injection-molded part, we start recommending commands that might help him with molded-part-related options, add-ons he might be familiar with, or ones he might not even know about, such as applying draft angles. That’s just one example.
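As an illustration of that profiling idea, here is a toy sketch of part-type-driven command recommendation. The part types, command names, and mapping are all invented for the example; they are not Design Center’s actual taxonomy.

```python
# Invented mapping from part type to potentially useful commands.
RECOMMENDATIONS = {
    "injection_molded": ["draft", "wall_thickness_check", "parting_line"],
    "sheet_metal":      ["flange", "bend", "flat_pattern"],
    "machined":         ["hole", "chamfer", "pocket"],
}

def recommend(part_type, already_used):
    """Suggest relevant commands the user has not run yet on this part."""
    candidates = RECOMMENDATIONS.get(part_type, [])
    return [cmd for cmd in candidates if cmd not in already_used]

# The user opened an injection-molded part and has only set the parting line.
print(recommend("injection_molded", already_used={"parting_line"}))
# -> ['draft', 'wall_thickness_check']
```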
The second example is that we can detect a pattern wherein the user is working on a large assembly and selecting a series of similar faces or components. NX can now anticipate which component the user intends to select next and perform the operation across all of them. We automatically detect the pattern, start guiding the user in selecting similar components or edges, and then apply the operations.
By using machine learning models and training them against different types of datasets, features, and so on, we have reached a point where we start suggesting and predicting things for the user under this umbrella of predictive UI. Users have started seeing productivity and efficiency gains across a number of operations within NX, and whether the user is working on a piece part, a drawing, PMI, or a large assembly, we can detect that pattern and guide him so that his overall workflows are more efficient going forward with AI.
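Here is a deliberately simplified sketch of the “select similar” idea: match faces against the one the user just picked using a few geometric descriptors. Real selection prediction uses trained models over much richer geometry; the descriptors and tolerance below are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Face:
    face_id: int
    surface_type: str  # e.g. "cylindrical", "planar"
    area: float        # mm^2; a real kernel exposes many more descriptors

def similar_faces(seed, faces, area_tol=0.05):
    """Return faces of the same surface type whose area is within a
    relative tolerance of the seed face the user just selected."""
    matches = []
    for face in faces:
        if face.face_id == seed.face_id:
            continue
        if face.surface_type != seed.surface_type:
            continue
        if abs(face.area - seed.area) <= area_tol * seed.area:
            matches.append(face)
    return matches

# The user picks one cylindrical face; the tool proposes the rest.
model_faces = [
    Face(1, "cylindrical", 120.0),
    Face(2, "cylindrical", 118.5),
    Face(3, "planar", 300.0),
    Face(4, "cylindrical", 121.2),
]
print([f.face_id for f in similar_faces(model_faces[0], model_faces)])  # -> [2, 4]
```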
Rahul Garg: And I think those are all good examples of how the evolution started quite a few years ago. It’s an area we have continued to invest in, which is why we are continuing to see a lot of improvements and efficiency gains from our customers. Do you have a sense of how many customers are using command prediction now?
Shirish More: When it comes to the percentage, I think 80 to 85% of our customers have enabled and are using command prediction, selection prediction, or select similar faces. And not only that, these are among our most used commands, and the efficiency gains customers have realized are awesome. For example, I know that if I have a customer who is working on a turbine blade or a rotor blade and is trying to create chamfers or blends on the sharp corners, traditionally that would have taken 20 to 40 clicks, plus zooming in and out. Now, with selection prediction and command prediction, they are able to apply a chamfer or blend, or profile the sharp edges, in two clicks.
Here’s a real-world example. Traditionally, some of these commands would have taken 20-plus clicks, plus zooming in and out. Now, with command prediction, selection prediction, and selecting similar faces and edges, that has been drastically brought down to two clicks. This is the power of AI.
Rahul Garg: That’s fantastic, really amazing, and having 80 to 85% of our users already using it is impressive. I would imagine that as the capabilities of AI keep maturing and our abilities keep increasing, customers will be quite ready to start adopting them.
Chris Pennington: What capabilities are coming into Design Center and how are customers adopting these capabilities?
Shirish More: AI has really elevated the capabilities of Design Center, and we are seeing this happen on two main fronts. The first is generative AI, with the introduction of Design Center Copilot, or NX Copilot, starting with 2012, which is nothing but our AI-powered assistant built into the environment itself. When I talk with customers and users, they’re used to having a copilot that sits outside the environment, ChatGPT or DeepSeek for example, in a standalone browser. And there is a real, fundamental disconnect between those browser-based, ChatGPT-like tools and NX, because those tools are not context aware, meaning they have no idea in which context the user is asking the question or needs help.
This is where the power of Design Center Copilot, our large-language-model-based assistant, comes in handy, because it allows engineers and designers to interact with Design Center using natural language, whether they are searching for a command, asking design-related questions, or getting help with documentation. And going back to what I was saying, regardless of skill level, you can always start an interaction with Copilot. And not only that, the Copilot is trained using the technical documentation.
We are taking it further with generative AI, basically enabling the Copilot to assist with tasks like generating new engineering content, validating the design against best practices, and even automating day-in, day-out tasks. Now that we have trained the Copilot using our NX Open APIs, for example, a designer can say, well, day in, day out I create datum axes and datum planes before I start working on a new design. He can simply ask the Copilot to generate a small script and then execute that script.
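The shape of that flow, as a hedged sketch: a prompt goes to a model trained on the automation API, a small script comes back, and the user reviews it before it runs. Both functions below are hypothetical stand-ins, not the NX Open API or the actual Copilot interface.

```python
def generate_script(prompt: str) -> str:
    """Stand-in for a call to an LLM trained on an automation API.
    Here it just returns a canned script for the datum-setup request."""
    return (
        "# Auto-generated setup script (illustrative pseudo-commands)\n"
        "create_datum_plane(origin=(0, 0, 0), normal=(0, 0, 1))\n"
        "create_datum_axis(origin=(0, 0, 0), direction=(1, 0, 0))\n"
    )

def run_after_review(script: str) -> None:
    """Stand-in for the CAD tool's script runner; a real assistant would
    show the script to the user and execute it only on approval."""
    print("Would execute:\n" + script)

prompt = "Create my usual datum planes and datum axes for a new design."
run_after_review(generate_script(prompt))
```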
But you can see how we are now heading toward generative AI, where the Copilot in Design Center will not only generate engineering content but, depending on the different modalities we have used to train it, can really speed up the overall workflows in NX or inside Design Center. Now, with both the Copilot framework and the agentic framework, we are taking Design Center to a level where, regardless of which role you play within an organization, as long as you know what type of question or what type of assistance you need, Copilot is there as your teammate, helping you out day in, day out with your design workflows.
Rahul Garg: Yeah, well, I think those are some pretty fascinating things coming down the road. You know, one of the things I always wonder is, as these new capabilities come in: first of all, how can someone begin to learn to use them? And given the different levels of skill that typically exist in an organization, how can they use these different capabilities? Any thoughts on that, Shirish?
Shirish More: Yeah, that’s a very good question, Rahul. I meet with our customers either at conferences or when they ask me to conduct a workshop, where they invite different types of users to understand how AI is evolving, and some of those customers might already have been on this journey for a while.
As I explained, there are users, very smart users, who might have already started leveraging ChatGPT, but outside of our PLM tools. For such customers, when they kick off this discussion, I normally like to break it into phases. I say, well, let’s start with crawl, walk, run, and fly phases, because yes, in the end every customer would really like to jump straight to an agentic framework or a Copilot that can understand the user’s prompt and suggest new designs or new innovative ideas. And yes, we are heading in that direction.
But before you can reach that point, you first need to roll out an AI adoption plan. What I mean by that is: take the crawl phase, for example. Now that customers see where Siemens as an organization is going with our AI strategy, they need to start adopting AI capabilities, and I normally suggest they begin by using AI to reduce repetitive actions and to navigate the UI.
For example, if you’re going to onboard a new user, start with the things we just discussed: predictive UI, or command prediction. Focus on increasing individual productivity and easing the onboarding of new users. That will boost productivity by streamlining design workflows, and it’s going to reduce manual effort by automating repetitive tasks. Once users start gaining confidence, once they start trusting AI, they can move to the walk phase, which introduces more advanced capabilities like the AI-enabled performance predictor, which monitors your product’s characteristics while AI suggests load conditions, topology-optimization-style workflows, and other things.
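A performance predictor of this kind is essentially a surrogate model: learn from past simulations, then estimate a new design’s behavior without rerunning the solver. Below is a minimal nearest-neighbor sketch; the parameters, units, and data are invented, and the actual Design Center capability is far richer.

```python
import math

# Past simulation results: (wall_thickness_mm, rib_count) -> max stress in MPa.
# All values are invented for illustration.
history = [
    ((2.0, 2), 310.0),
    ((2.5, 2), 265.0),
    ((2.5, 4), 220.0),
    ((3.0, 4), 185.0),
]

def predict_stress(params, k=2):
    """Estimate stress by averaging the k nearest historical designs
    (plain Euclidean distance in parameter space)."""
    ranked = sorted(history, key=lambda entry: math.dist(entry[0], params))
    return sum(stress for _, stress in ranked[:k]) / k

# Rough estimate for a new design before any simulation is run.
print(round(predict_stress((2.7, 3)), 1))  # -> 242.5
```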
And that’s the walk phase, where you say, all right, my users are gaining confidence. They know how to use AI inside our Design Center tools and are leveraging it to speed up the overall design process. With that in mind, they can say, all right, I’m ready to jump to my run phase, where I start using my best practices and my technical knowledge to train the machine learning models as well as the large language models.
And once they start integrating AI into their training processes, meaning leveraging best practices, requirements, specifications, and historical data, they are ready for the final phase, fly: using Copilot, agents, and foundation models to unlock generative workflows, reusing historical IP to start generating engineering content.
Rahul, you can clearly see how we just broke the plan down into four phases: crawl, walk, run, and fly. But you need to start early in the cycle so that, maybe two years from now, you are ready to fly with some advanced use cases leveraging AI.
Rahul Garg: Yeah, I think this is very helpful, the way you broke it down into four well-defined phases for bringing AI adoption into your organization in a very organized and structured way, so that you can really get the full advantage of the capabilities. There are a couple of things you mentioned, though, that I want to unpack a little more, where you spoke about bringing in your company’s IP, your company’s knowledge.
Using that to do some generative design as well is probably going to require some pretty specialized skills, not only from the designers but also on the back end, from an infrastructure perspective. Perhaps you can talk a little bit about the building blocks a company needs to be thinking about before they even embark on this crawl, walk, run, fly journey. And then, in the run and fly phases, when they are bringing in the knowledge of the company, how does that come about?
Shirish More: Yeah, and that’s again a very good question, because when we discussed the crawl, walk, run, and fly phases, we accounted for things like customer IP. Siemens is taking a very intentional and responsible approach to helping, in this case, machine builders integrate more AI-driven processes, especially with tools like Design Center and Teamcenter. One of the biggest enablers is how we use our enormous domain IP and customer data to train and fine-tune the AI models going forward.
We’re not just adopting general-purpose AIs; we are building domain-specific intelligence that can understand the engineering context and the engineering language, the manufacturing constraints, and mechanical design principles. That includes training on Siemens-owned documentation and the validation rules we release with our products, and, where permitted, we are going to use this data.
But beyond this data, beyond the models that we provide, Siemens is also investing in educating and guiding our customers on how to adopt AI responsibly, making sure they identify the right use cases, for example automating repetitive tasks or improving product quality by drawing on their best practices. And when I discuss these scenarios with customers, I realize that the majority of them have best practices and historical data that are proven.
That data resides in their Teamcenter repository. How can we help them identify the right use case based on that gold-source data, and how can we provide them with tools that can learn from their data, which we call customer IP? The way we have architected the solution, data extractors learn from their data and we can train a model that stays within their premises. It’s not that we learn from your data and then share it with everybody. We provide tools for our customers that can extract this valuable knowledge, the historical data from all these years, and these models can then be deployed on premises and tested against their data.
And now you can see that once we roll out the Copilot architecture, for example, when a user needs assistance, these Copilots and agents look up data sources that can be within the customer’s premises as well as data very specific to our applications, and they give a seamless answer back to the user: based on the task at hand, a change request, or a work order, here is how to achieve that task in a much more efficient manner.
Behind the scenes, we are just looking across multiple data sources, and depending on the prompt or the help the user needs, we give the user the right response back. You can see that, going forward, it’s not just about giving customers an AI tool; it’s about helping them productionize it in a way that fits their product lifecycle, business priorities, and processes.
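One simple way to picture that multi-source lookup is a router in front of the retrieval step: decide which stores a prompt touches, fetch from each, and compose one answer. The source names and keyword rules below are invented; a production assistant would use embeddings, ranking, and access control rather than keyword matching.

```python
# Invented data sources and routing keywords for illustration only.
SOURCES = {
    "product_docs":   ["command", "how do i", "option", "dialog"],
    "customer_vault": ["work order", "change request", "our standard", "past design"],
}

def route(prompt: str):
    """Return every data source whose keywords appear in the prompt."""
    text = prompt.lower()
    hits = [name for name, keywords in SOURCES.items()
            if any(k in text for k in keywords)]
    return hits or ["product_docs"]  # fall back to product help

def answer(prompt: str) -> str:
    """Compose one response from the chosen sources (retrieval stubbed out)."""
    return "Answering from: " + ", ".join(route(prompt))

print(answer("How should I handle this change request against our standard bracket?"))
# -> Answering from: customer_vault
```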
Rahul Garg: Those are some great thoughts there, Shirish. You know, one of the simple examples I can think of is material properties, or designing for manufacturing needs, trying to do a lot of that analysis right up front, during the design process itself. Some of those capabilities are, in some cases, industrial insights that we can provide, like material properties and information, but design for manufacturability is perhaps a company process, tied to how they would want to manufacture something, how they would want to assemble something.
So, bringing both those things together would make it a lot easier for the design engineer to make sure that he’s accounted for all those requirements right up front in the design process itself.
Shirish More: Exactly, exactly. And this is where I’ll introduce a term: the industrial foundation model. It’s a model that we will develop, but it will also have the ability to be trained on a customer’s IP. This model is purpose-built to understand the industry language, the language of engineering and manufacturing, and unlike the general-purpose models out there, the models we provide our customers will be trained on structured and unstructured industrial data such as CAD geometry, simulation files, and, to your point, validation data they might have from their labs or elsewhere.
And once these models are trained, they are context aware, meaning they can understand the engineering intent, recognize the geometry in the context in which the user is asking for help, interpret process data, resolve ambiguity, and then start helping users tackle real-world industrial scenarios. That’s the plan, and absolutely, Rahul, you’ve got it: we are not just another AI model provider; we are here to help our customers leverage AI fully to speed up their overall product design processes.


