How ChatGPT Is Redefining the Future of Engineering Simulation with Kai Liu

On today’s episode, we’re joined by Kai Liu, Senior Key Expert of Simulation and Modeling at Siemens. He shares insights into the integration and impact of ChatGPT within Siemens and Simcenter tools, discussing the future of engineering with the advent of advanced AI technologies.

Key Takeaways:

  • Discussing ChatGPT’s assistance in programming and writing.
  • Integration of ChatGPT in Siemens and Simcenter.
  • How ChatGPT is changing engineering tool interfaces.
  • Integration of ChatGPT in the ‘Hi Simcenter’ project.
  • Strategies to reduce randomness in large language model responses.
  • Addressing AI drift and confidentiality concerns in professional LLM use.
  • The rapid evolution of ChatGPT and challenges in its professional integration.
  • Future predictions for domain-specific LLMs.
  • A discussion of AI’s impact on engineering jobs.
  • AI’s role in enhancing engineering productivity.
  • Speculation on how engineers might use LLMs in the future.

Resources Mentioned:

Hi Simcenter

ChatGPT by OpenAI

This episode of the Engineer Innovation podcast is brought to you by Siemens Digital Industries Software — bringing electronics, engineering and manufacturing together to build a better digital future.

If you enjoyed this episode, please leave a 5-star review to help get the word out about the show.

For more unique insights on all kinds of cutting-edge topics, tune in to siemens.com/simcenter-podcast.

Kai Liu:

I think this is a really good question, whether AI will replace us. You have over 30 years of experience in engineering, and I have about 20 years of engineering experience. But I don't think AI will replace us, and we will still need a lot of foundational knowledge to understand what AI produces for us.

Stephen:

Hello, and welcome to a special edition of the Engineer Innovation Podcast, in which we celebrate the first anniversary of the public release of ChatGPT. If you wind the clock back one year to November the 30th, 2022, I think you'll find that back then most people regarded artificial intelligence as a bit of an abstract concept, the kind of thing that was only accessible to people with lots of expertise.

The public release of ChatGPT, built on GPT-3.5, I think changed that forever, giving everyone the ability to start using artificial intelligence in their own language from the comfort of a web browser. And what seemed like a bit of a toy in the beginning eventually started to embed itself in people's lives, increasing their productivity and creativity, sometimes by orders of magnitude. I'll confess to being a bit of a fanboy, using ChatGPT almost every day.

But one thing I can't do just yet is ask ChatGPT to set me up a CFD simulation, or a system simulation, or a structural simulation. That might be about to change. For this special edition of the Engineer Innovation Podcast, I talked to Kai Liu, the engineer leading the effort to bring LLM technology into Simcenter. Enjoy your listen.

On November the 30th, 2022, we saw the first version of ChatGPT released to the public. And I don't think anybody appreciated back then quite what an impact it was going to have on the whole world, and especially, in our case, on the world of engineering. I'm joined today by Kai. Good morning, Kai. How are you?

Kai Liu:

Hi, good morning. I'm fine, thank you, Stephen.

Stephen:

Kai, can you cast your mind back to a year ago? Had you heard of ChatGPT or large language models this time last year?

Kai Liu:

Frankly speaking, I hadn't, one year back. I heard about ChatGPT for the first time in November last year, and I was really impressed with its capability. Before that, I didn't even know ChatGPT existed.

Stephen:

I think everybody, or most people, were in the same boat. For most people before November last year, artificial intelligence and machine learning were a bit abstract, things that other people, data scientists, used; they weren't available to the general public. Then, last November, ChatGPT became available, and it's like a sandbox for everybody, isn't it? Most people would've gone to ChatGPT, asked it some questions, maybe not got the responses they were expecting. But it was the first time that AI technology was generally available to everybody. Have you managed to use ChatGPT in your personal life? Do you use it a lot?

Kai Liu:

Yes, I use ChatGPT almost every day. I use it for my programming tasks, for project management, and to write emails. I'm really impressed by the ability of ChatGPT, and the results are quite amazing. Each time I write an email to colleagues, I just let ChatGPT correct my grammar, and it does much, much better than I can do myself. It's really amazing.

Stephen:

And I've had a similar experience, to the extent that if they switched off ChatGPT tomorrow, I'd have to do quite a lot of learning to remember how to do lots of things. In my case I've used it, as you have, for lots of programming. I've been in engineering for 30 years, so I'm more of a Fortran 77 guy.

For me, trying to programme in Python has been quite difficult, and it's been a bit of an obstacle to some of the projects I've been working on. But now I just ask ChatGPT to write most of my programmes for me, and we do it in a conversational way. I explain what I need ChatGPT to do, tell it not to write any code to start with, and then we go through it together. It writes some code, it makes some mistakes, and we fix the code together.

And at the end of it, I'm achieving things with programming that I could never have done before, and doing it about 10 times quicker than I ever could have. For me it's an incredible thing. And like you, I also use it for writing. I write for a living, but I sometimes use ChatGPT to rewrite things I've done to make them punchier, to write in different styles, or to write things for social media. Like you, I use it nearly every day, and I sometimes joke that I talk to ChatGPT more than I talk to my wife, but I wouldn't let her know that. In a professional sense, then, how has ChatGPT impacted your role at Siemens and Simcenter?

Kai Liu:

In my current role, I'm a programme manager responsible for bringing ChatGPT, or generative AI in general, to Simcenter tools. We often hear from our customers that they can never get enough speed, accuracy and ease of use, and what ChatGPT can help us with in this regard is making Simcenter tools easier to use. We have done a lot of innovation projects in the past regarding AI and ML, and we believe conventional AI and ML can help our customers with their projects and make faster decisions. It's all about simulation democratisation. With generative AI, we can really democratise the simulation tools.

Stephen:

And the truth is that we invest a lot of money in making our Simcenter tools easy to use. They are much easier to use than lots of our competitors' tools in the market, and they're getting easier all the time. But CAE tools are also really difficult. They're complicated, aren't they? Modelling and predicting physics involves making lots of modelling choices, and that's always been really difficult. And I think what these kinds of LLMs like ChatGPT can do is help us talk to simulation tools using our natural language.

Like I've said before, I've been in the game for 30 years. The generation of simulation engineers before me were using punch cards to talk to their simulation models. I'm the command line guy: I did CFD simulation before the point-and-click GUI era, when we had to learn the language of the computer and type it on the command line.

Then there was the whole graphical user interface era, which is still going on today. And now we're getting to a stage where we can actually communicate with computers and with simulations just by talking to them in English or French or German or whatever language you choose. I think that's a really powerful change because, like you say, it makes it really accessible. Do you think it's going to be a step change for the future of engineering simulation?

Kai Liu:

Absolutely. It's really about simulation democratisation, as I said before, and ChatGPT or generative AI gives us essentially two capabilities. One is a natural user interface, as you have already said. Right now we have to use a command line interface or a graphical user interface. With generative AI, we will get a natural user interface, where you use natural language, plain English, to talk to the system; or maybe in the future you can use pictures or speech to interact with the system. It's changing how you interact with our simulation systems.

And also, generative AI, ChatGPT, is giving us a reasoning engine. It can really understand what we're asking the system to do. It's really amazing. Think about how, today, we have to master a lot of skills for our simulation tools. For example, if you are doing system simulation, you need to master how to use Amesim.

If you are doing fluid and thermal simulation, you need to know how to use [inaudible 00:08:00]. That implies a lot of training; you really need to take a lot of courses, and if you are a simulation beginner, it takes time. But if we had this kind of reasoning engine, we could just ask questions. We can imagine that in the future we'll get an agent which can pick the right tool for you to finish your tasks. That's real reasoning.

Stephen:

Yeah. For me, one of the interesting things about ChatGPT is that it's like a conversation, isn't it? Because you are trying to explain to ChatGPT exactly what you want to do, it's a bit like the idea that to really understand something, you have to be able to teach it to someone else. So you sit down and, across a few ChatGPT prompts, you explain exactly what you want to do, how you want to do it and what the parameters are. And that's a really powerful experience as well.

I've always described it as having an attentive intern who listens to everything you say and won't mind you correcting any mistakes. So having that kind of conversational approach to engineering seems much more intuitive to me than navigating through menus, trying to find a button for a turbulence model.

Saying, "I want to solve this problem using these models. How would you do it and how would you advise me?" is a much more powerful and intuitive way of doing engineering, I think. This journey of implementing LLMs or ChatGPT-type clients into our software is going to be a long one, isn't it? We're not going to be able to do it overnight. As we've said, this technology's been around for just over a year, and you're one of the authors of Hi Simcenter. Can you tell us a bit about Hi Simcenter, please, Kai, and how it works?

Kai Liu:

Sure. The main motivation for creating Hi Simcenter came on 3 March, I believe, when Wolfram published their Wolfram Alpha demo based on ChatGPT. At that time we saw that with this kind of capability we were able to connect the engineering world with ChatGPT, and to make use of the computational power of an engineering platform and the conversational power of ChatGPT to solve mathematical problems.

We thought we could do something similar in Simcenter. With some talented engineers, some of my colleagues, we sat together and did some ideation to see how we could bring Simcenter to ChatGPT, or the other way round, add ChatGPT to Simcenter, and create a conversational user interface for Simcenter.

After a few days' work, we saw that ChatGPT was able to autonomously call the APIs that we provided to it, and those APIs drive Simcenter to finish the tasks we gave to ChatGPT. Behind the scenes, it was not only the power of ChatGPT that enabled this capability, but the combination of generative AI and engineering power. We added a lot of AI assets behind Hi Simcenter, for example recommender systems and an ontology graph that holds a lot of domain knowledge. The recommender system helps the reasoning engine work out which components can be added next in a sketch. That's really amazing. This combination of ChatGPT and Simcenter makes the tool much more powerful and makes the interaction with Simcenter much, much more natural.
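To make the pattern Kai describes concrete, here is a minimal sketch of an LLM autonomously calling APIs that a host application exposes, assuming the OpenAI Python SDK's function-calling interface. The tool names (`add_component`, `connect_components`), the model name and the behaviour are hypothetical placeholders for illustration only, not real Simcenter or Hi Simcenter APIs.

```python
# Minimal sketch of LLM tool-calling, assuming the OpenAI Python SDK (v1).
# The "tools" below are hypothetical stand-ins for APIs an engineering
# platform might expose; they are NOT real Simcenter or Hi Simcenter calls.
import json
from openai import OpenAI

client = OpenAI()

tools = [
    {
        "type": "function",
        "function": {
            "name": "add_component",  # hypothetical sketch API
            "description": "Add a component (e.g. mass, spring, damper) to the sketch.",
            "parameters": {
                "type": "object",
                "properties": {"component_type": {"type": "string"}},
                "required": ["component_type"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "connect_components",  # hypothetical sketch API
            "description": "Connect two components by their ids.",
            "parameters": {
                "type": "object",
                "properties": {
                    "source_id": {"type": "string"},
                    "target_id": {"type": "string"},
                },
                "required": ["source_id", "target_id"],
            },
        },
    },
]

response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[{"role": "user", "content": "Build a simple mass-spring-damper model."}],
    tools=tools,
)

# The model decides which exposed API to call and with what arguments;
# the host application executes the call and feeds the result back.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, json.loads(call.function.arguments))
```

In this kind of loop, the LLM supplies the reasoning and the engineering platform supplies the execution, which is the division of labour Kai describes.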

Stephen:

Yeah. Just type Hi Simcenter into Google or look at the show notes for this podcast. It's a short YouTube video, but what it shows is that basically the only prompt you type is, "Hi Simcenter, could you please create an example of a seven-degree-of-freedom chassis model using the most popular components from past projects. Do it step by step, connect the components that are currently unconnected, follow the Amesim workflow to complete the task and ensure that the simulation can be initiated."

You type that single prompt into the Hi Simcenter LLM prompt and, much like when you ask ChatGPT to write a programme, it goes away into the background, works through the various steps, and then builds this quite complicated Amesim model in front of your eyes. I think that's a really incredible demonstration of the potential of this technology. Where are we on the path to implementing that in Simcenter?

Kai Liu:

ChatGPT, or generative AI technology, is relatively new for us. We're still learning how we can bring it into our products to let our customers benefit from it. Basically, we see two categories of ways to bring this technology into our products. The first category is to use documentation, user documentation, as a data source to help answer our users' questions. The chatbot would answer questions based on the documentation.

This capability requires that we use the user guides, for example, or the knowledge base articles that we have in our data store, to help users answer documentation-based questions. We see this as low-hanging fruit; it's also something ChatGPT is able to do very well.

The second category is more sophisticated. In that use case we let ChatGPT create models, run simulations and automate workflows. That would require us to integrate our tools with ChatGPT, for example by exposing the APIs of our tools to ChatGPT to make a connection between the LLM and our tools.

This would also imply that we have to add a lot of domain knowledge to ChatGPT, for example by connecting a knowledge graph or connecting ontologies. And for our customers, we want to bring reliable solutions with generative AI technologies, so that users feel confident with our tools and don't see a lot of hallucinations from ChatGPT. That would require us to add additional layers between ChatGPT and our tools to reduce the hallucinations.

The second category, I would say, is still in the exploration phase. We are now forming a team that is trying out different technologies and studying their feasibility for the future.

Stephen:

You mentioned reliability there. I guess one of the challenges with LLMs like ChatGPT is that there's a degree of randomness in the way they produce their answers. If I ask Google the same question five times in a row, I'll get the same answer five times.

If I ask ChatGPT to do something for me five times in a row, each time it'll do it in a subtly different way, sometimes in a really different way, and you end up at different endpoints. I guess that's going to be one of the challenges, isn't it? Because in engineering you want a repeatable process, and at the moment these large language models tend to have a degree of randomness, which might be less than helpful. Is that something we're going to have to overcome?

Kai Liu:

Yes, absolutely. For engineering it's very important to always have deterministic answers, but ChatGPT has been built with a lot of creativity in it. If you ask it questions, you can't guarantee that you'll always get the same answers. But there are several techniques that can minimise this kind of randomness. You can, for example, lower the temperature of ChatGPT to make it less creative. This is one way to do it.
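As a small illustration of the temperature setting Kai mentions, here is a minimal sketch assuming the OpenAI Python SDK; the model name and the prompt are placeholders. A temperature of 0 makes the sampling close to deterministic, although it still does not guarantee identical answers on every run.

```python
# Minimal sketch: lowering temperature to reduce randomness,
# assuming the OpenAI Python SDK (v1). Temperature 0 is near-greedy
# sampling; higher values (e.g. 1.0) give more varied, "creative" output.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[{"role": "user", "content": "Which turbulence model suits external car aerodynamics?"}],
    temperature=0,  # low temperature -> more repeatable answers
)
print(response.choices[0].message.content)
```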

Another way is to add domain knowledge to ChatGPT, because ChatGPT was trained on public data. If you don't add additional knowledge or context, it just uses what it has learned from that public data to answer your questions, and this can generate a lot of randomness. But if you add additional context and ask ChatGPT, "Please answer the question based on the information I provided, and don't be very creative," that reduces this kind of randomness.

This technique is called retrieval-augmented generation, meaning that you first retrieve the information from your domain, use the retrieved data as context, and then ask ChatGPT to answer only based on the information you provided. This really reduces the randomness.
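To make the retrieval-augmented generation pattern concrete, here is a minimal sketch. The `retrieve_documents` function, the document snippets and the model name are hypothetical placeholders; in a real system the retriever would query a search index, vector store or knowledge graph.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# retrieve_documents() is a hypothetical placeholder for any domain
# retriever (keyword search, vector database, knowledge graph lookup).
from openai import OpenAI

client = OpenAI()


def retrieve_documents(question: str, top_k: int = 3) -> list[str]:
    """Hypothetical retriever: return the most relevant snippets
    from the domain documentation for the given question."""
    # In a real system this would query a search index or vector store.
    return ["<relevant documentation snippet 1>", "<relevant documentation snippet 2>"]


def answer_with_rag(question: str) -> str:
    # Build a prompt that constrains the model to the retrieved context,
    # which is exactly the "answer only based on what I provided" idea.
    context = "\n\n".join(retrieve_documents(question))
    prompt = (
        "Answer the question using ONLY the context below. "
        "If the context is not sufficient, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # combine retrieval with low temperature for repeatability
    )
    return response.choices[0].message.content


print(answer_with_rag("How do I set up a mesh refinement study?"))
```

Combining retrieval with a low temperature, as in this sketch, addresses both sources of variability Kai describes: missing context and creative sampling.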

Stephen:

I think one of the things you're alluding to there is this concept of AI drift, isn't it? LLMs and artificial intelligence also learn from responses. That's part of the training, isn't it, in that your LLM is learning from the responses it gets from the human users of the system. Is there an issue with LLMs learning from or using material that might be confidential or protected as intellectual property? Is that going to be a concern when using LLMs in a commercial sense?

Kai Liu:

Yes, it's a big concern at the moment, I would say. We see LLMs being applied in two different sets of use cases. The first is for internal developers, where we're trying to bring in LLMs to improve our internal developers' productivity.

For example, you can use GitHub Copilot to produce code for our developers. But we are still not sure whether we are able to use the produced code, because it could violate the intellectual property of other companies, or you might violate open-source licences, and so on.

For this type of use case, we are relying on our IT department, which is working very hard to make sure that the usage of ChatGPT, or of these kinds of LLMs, is compliant with our company policy. The second use case is where we'd like to provide solutions to our customers, and our customers will question whether their data is secure.

For that case, you might have seen that Siemens has launched a partnership with Microsoft to create the Siemens Industrial Copilot. I believe that with this partnership our customers will become more confident in the future and get more reliable solutions from Siemens.

Stephen:

One of the challenges of using ChatGPT as just a regular person, not in a professional context, is that it moves, changes and evolves so quickly. We've seen a huge amount of evolution in ChatGPT in the past 12 months, from a simple prompt to start with to something that can now search the internet, that you can use to write code, that you can use to interpret data. Is that pace of development difficult for you to keep up with as you try to integrate it into our solutions?

Kai Liu:

We have seen a lot of information around ChatGPT, and a lot of people are trying to bring their own solutions to market and train their own models. A lot of research papers have also been published around ChatGPT. The amount of information is really exploding.

But for us as innovation engineers, we need to filter out the information that is really interesting for our customers. That is a little bit challenging, but we're trying very hard not to be distracted by information that is not relevant to our customers. It's still a work in progress. It's not easy, but we're trying very hard.

Stephen:

It's a question of focus and concentrating on what's important and what delivers the biggest value for Simcenter users. A thing we wouldn't have talked about a year ago is prompt engineering. I'm sure if you looked at the Google Trends history, "prompt engineering" wasn't even a phrase a year ago.

And now if I look at my social media, it's full of people trying to describe the best prompts for getting the best out of ChatGPT and other LLMs. Which is interesting, isn't it? Because I guess in the future some of those prompts could be: "I'd like to perform a CFD simulation on a car. What turbulence models would you suggest? What sort of domain size do I need? Could you help me make a mesh?"

Hopefully, in the next four or five years we might be able to prompt ChatGPT to help us build simulation models. Do you think that’s a realistic ambition?

Kai Liu:

Yes. We have seen that OpenAI is adding the capability for everyone to build their own custom GPT models. It implies that we might quite easily add a simulation capability, for example a simulation GPT, for our own tools. I think in the next few years we can foresee that the ecosystem of LLMs will be much, much larger than today, and we will see a lot of customised large language models for a lot of domains.

A large model is just like a reasoning engine. We have several foundation models coming from large companies like Microsoft or Google, but we will see a much larger ecosystem around these foundation models, dedicated to each domain. That is a trend I think we can already predict. Customisation is the way I see the future of ChatGPT.

Stephen:

We've talked largely about the positive things to do with AI. There's also a popular narrative that AI will cost people their jobs, that it'll be the end of engineers. What would you say to people who think AI is going to end engineering as a profession? Can you see that happening?

Kai Liu:

I think this is a really good question, whether AI will replace us. You have over 30 years of experience in engineering, and I have 20 years of engineering experience, but I don't think AI will replace us; we still need a lot of foundational knowledge to understand what AI produces for us. What AI will help us do is improve our productivity, and we'll still need engineers in the future.

Stephen:

I agree completely. The truth of engineering is that none of us have enough time or enough resources to do the amount of engineering simulation and innovation that we'd actually like to do, and those limits mean that we're not as productive as we'd like to be. Anything that helps us increase our efficiency will help the world by helping us design better products, more robust products, more innovative products.

And I think this is the technology with the biggest potential for accelerating the rate of innovation. Because, as you said earlier, it democratises simulation: it makes it more accessible and more efficient, and it also helps with those cross-domain challenges. I'm a CFD guy; if I want to do some finite element simulations, I generally have to go and talk to a finite element engineer.

But you can imagine that, using AI and machine learning, that's going to become more accessible to me, in the same way that I can suddenly programme in Python as well as I need to with ChatGPT's help. That will help engineers access other disciplines and domains, and improve the quality of engineering. I think all of us waste too much time setting up models, building meshes and building 1D systems, when really what we're paid to do is understand the data, use it to inform design decisions and basically fuel innovation. And I can only imagine that this technology will help us do that.

It's something that really excites me. We've had a year of ChatGPT, and we couldn't have imagined how much progress would be made in that year. If you and I had had this conversation a year ago, it would've seemed like science fiction. If we went forward, say, another nine years, to the 10th anniversary of ChatGPT, Kai, can you imagine engineers then using LLMs as a standard way of interacting with their simulations? Is that a long enough timeframe, or do you think it will happen before then?

Kai Liu:

I would say nine or 10 years would be too long. Even in the next four or five years we will see a big change in the engineering world, and for all of us that will mean a big change in how we work. Of course, it's hard to predict the future, but the LLM could be like an operating system of the future.

In the past we have seen computer technology evolve, and we have built a lot of ecosystems around computers. In the next four or five years we could see a lot of ecosystems built around LLMs. The skills required of engineers will also be different than today: engineers who are able to use AI will benefit much more than people who don't know how to use it. It's just a question of what and how.

Now we are asking people to learn a lot of tools, but in the future we will ask people to learn how to use AI tools to accomplish their tasks. They need to understand how to use AI, rather than learning a lot of new technologies, because the AI is actually doing the hard part. I'd like to reference what our big boss said: AI will not replace you, but the people who use AI will.

Stephen:

Which is a good prompt, I think, for everybody to stop listening to this podcast now, to go away, open ChatGPT and do some clever stuff with it, if you're not already doing that every day like Kai and I are. This is moving so quickly that I don't think we're going to wait five years to talk to you again, Kai. Maybe next year you can come back and we can talk about how ChatGPT has changed during its second year, and about how much progress you're making in implementing these technologies in Simcenter solutions. That's been a really excellent interview. Thank you very much. And thank you for listening to Engineer Innovation.

Speaker 3:

This episode of the Engineer Innovation Podcast is powered by Simcenter. Turn product complexity into a competitive advantage with Simcenter solutions that empower your engineering teams to push the boundaries, solve the toughest problems, and bring innovations to market faster.

Stephen Ferguson – Host

Stephen Ferguson is a fluid dynamicist with more than 30 years of experience applying advanced simulation to the most challenging problems that engineering has to offer, for companies such as WS Atkins, BMW, CD-adapco and Siemens.

Kai Liu

Kai Liu is a seasoned research engineer with 15+ years of experience in the industrial field. He is passionate about investigating cutting-edge technologies and studying their feasibility for potential product opportunities, and has spent the last 10 years helping to introduce various advanced technologies into product development.


Take a listen to a previous episode of the Engineer Innovation Podcast: Engineer Innovation: Taking Agriculture Underground with Jamil Madanat on Apple Podcasts

Engineer Innovation Podcast

A podcast series for engineers by engineers, Engineer Innovation focuses on how simulation and testing can help you drive innovation into your products and deliver the products of tomorrow, today.


This article first appeared on the Siemens Digital Industries Software blog at https://blogs.sw.siemens.com/podcasts/engineer-innovation/how-chatgpt-is-redefining-the-future-of-engineering-simulation-with-kai-liu/