In this episode we explore one of the best examples of a living (and literally breathing) digital twin: a “lung in the loop” model that allows a single ventilator to assist the breathing of multiple patients.
We’re joined by Daniel Reed, Senior R&D Project Manager at MxD, a public-private partnership that matches federal investment with private investment to advance digital manufacturing technology for the US manufacturing industry.
We talk about:
– Working in MxD’s “future factory” in Chicago.
– Using sensors and networks to create smarter factories.
– The role of the digital twin in the factory of the future.
– The “lung in the loop” model — using a digital twin to deal with ventilator shortages during Covid.
– How the digital twin processes data.
– The potential of having digital twins in hospitals around the world.
– Why isn’t there more data sharing in manufacturing right now?
– What’s next with the lung in the loop model?
– The exciting future of the digital twin.
This episode of the Engineer Innovation podcast is brought to you by Siemens Digital Industries Software — bringing electronics, engineering and manufacturing together to build a better digital future.
If you enjoyed this episode, please leave a 5-star review to help get the word out about the show.
For more unique insights on all kinds of cutting-edge topics, tune into siemens.com/simcenter-podcast.
Daniel Reed (00:00):
When you’re building a simulation, you could never perfectly simulate the real world. You’d have to be simulating quantum mechanics, and we don’t have the computing power for that, and we know that our simulations aren’t totally correct.
So the digital twin enhances that simulation by capturing data from the real physical system and learning about what a normal operation is, so that you can then use that model and that understanding to do all kinds of different things once you have that enhanced, rich digital twin model.
Stephen Ferguson (00:48):
My name is Stephen Ferguson, and you’re listening to the Engineer Innovation podcast. In this episode, my guest is Daniel Reed from MxD, and, together, we explore what I think is one of the best examples of a real-life digital twin.
If you can bear to cast your minds back to the deepest, darkest days of the Covid pandemic, you might remember that there was a worldwide shortage of mechanical ventilators used to treat those patients with the most severe Covid-induced breathing difficulties.
Now, mechanical ventilators are designed only to assist the breathing of a single patient, which is why Daniel and the team at MxD have come up with a digital twin, which demonstrates the feasibility of using a single ventilator to assist the breathing of two patients.
As I said at the beginning, this is one of the best examples of a real-life digital twin. It includes a virtual model, a physical asset. It includes simulation, test, machine learning, edge computing, as well as the processing of lots and lots of data on the cloud.
I also talk with Daniel about all of the ways in which simulation, test, and artificial intelligence will play a role in the factory of the future. And as you’ll find out in just a minute, Daniel is a very interesting and incredibly charismatic guest. I think this is one of my favourite interviews for the Engineer Innovation Podcast. Enjoy your listen.
Welcome, Daniel. Can you tell me a bit about yourself and a bit about MxD, please?
Daniel Reed (02:19):
Yeah, absolutely. I am a technical programme manager here, which means I lead technical projects. MxD is a public-private partnership, essentially a not-for-profit that matches a dollar of federal investment with a dollar of private industry investment. And we use that collaboration and that kind of co-investment to advance digital manufacturing technology for the US manufacturing industry writ large.
We really are of the belief that a rising tide lifts all boats. We’re one of a network of these kinds of manufacturing innovation institutes around the country, but our focus is specifically around digital manufacturing, cybersecurity, and the digital thread.
Stephen Ferguson (03:06):
And actually, we’re talking today, you’re in the future factory. Where are you based? In Chicago, is it?
Daniel Reed (03:11):
Absolutely. Yeah, in Chicago. So we have a 55,000-square-foot future factory space here. That factory space is divvied up between our members, who bring some of their most advanced technology here to showcase it and teach others about it. We also have project outcomes on the floor, so projects that MxD scopes and funds and executes. Some of them have real, tangible outcomes like test beds, demonstrators, or other types of physical assets. So some of those are behind me.
We’ve got a real mix here, from stuff that MxD has designed and engineered for the purposes of educating the public about manufacturing, all the way down to what we would call really crunchy test beds, like our advanced wireless test bed, where we have essentially every kind of wireless protocol: Wi-Fi, 4G, 5G, Bluetooth, LoRa.
And someone could come and bring their devices that they’re contemplating networking and try it on every protocol and go, “Okay, well, this one’s maybe good for that or maybe bad for this, depending on your bandwidth needs or the ability to avoid occlusions.”
And so, this space is very multifunctional. We host events. We do all kinds of cool things. The only thing we can’t do is eat and drink on the factory floor, which is too bad because after a few drinks, that’s when we’d have the really good ideas.
Stephen Ferguson (04:32):
A naive view of factories might be that they’re kind of dirty, mucky places, but what you’re talking about is having all different types of networks and sensor signals. How does that fit into the modern factory?
Daniel Reed (04:43):
Yeah, absolutely, I think there is a perception of factories as kind of a dirty, difficult, dangerous job, but I think that’s where digital technology and automation can really actually change that script.
So our factory floor, like you can see, is bright and gleaming and shiny. Modern factory jobs are very much advanced jobs that involve both physical manufacturing technology, but also an understanding of digital technology, of data. Increasingly, that is going to be the source of competitive advantage for manufacturers.
For a manufacturer who’s collecting a lot of data about their operations, a lot of their intellectual property, their expertise at doing things, is baked into that data. But up until more recently, that has generally been locked away in the heads of those who are very experienced, that kind of tribal knowledge.
More and more, though, those data are being used to drive insights and decision-making, to mine what people call the next gold rush, or the new oil: data. To make better decisions and to automate, pushing some of that automation down onto the portions of the work that are less suited for humans because they are dirty, difficult, or dangerous. Then humans can focus on the higher-order improvement and efficiency activities.
Stephen Ferguson (06:11):
And I guess in order to generate that data, you must have sensors embedded in everything.
Daniel Reed (06:16):
Absolutely, yeah. And so, we’ve got different implementations. Most factories are brownfield, so their factory might have been there for 20, 30, 40 years, and they’re not able to buy all new equipment that is digitally enabled.
The good news is, you don’t have to. You can actually attach sensors to legacy equipment to gain a lot of very interesting, useful information without making any direct changes or needing to kind of go into the PLC, for example.
So, two of the very interesting examples on our floor. First, we have a test case where a small bit of Python code for machine vision was used to point a webcam at an analogue dial gauge and, using vision recognition, translate that analogue, discontinuous signal into a digital signal, because it knows where the hand of the dial is.
Or we’ve got a very old Bridgeport milling centre. This is probably twice as old as I am, but we’ve added current sensors. We’ve added other sensors to it, to be able to pull even relatively simple analytics off of it, like is it on or is it off? Is it cutting or is it waiting? Is it spinning but not cutting?
And even with just that low level of resolution data, there’s so much that a manufacturer can do to understand, for example, how much am I using this machine? Should I upgrade it? Should I not? Should I get rid of it? Maybe I only use it once a month. So, there’s all kinds of things that can be done by either using newer, integrated equipment or by attaching sensors to existing legacy equipment.
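Daniel’s on/off, cutting/idle example can be sketched in a few lines. This is a hypothetical illustration of the idea, not MxD’s implementation; the current thresholds are made up for the example.

```python
# Hypothetical sketch: classifying the state of a legacy milling machine
# from a single clamp-on current sensor. Thresholds are illustrative only.
OFF_AMPS = 0.5   # below this, the machine is assumed powered off
IDLE_AMPS = 4.0  # spindle spinning but not cutting draws less current

def machine_state(current_amps: float) -> str:
    """Map a raw current reading to a coarse machine state."""
    if current_amps < OFF_AMPS:
        return "off"
    if current_amps < IDLE_AMPS:
        return "spinning (not cutting)"
    return "cutting"

# A shift's worth of readings can then be rolled up into utilisation,
# answering questions like "how much am I actually using this machine?":
readings = [0.1, 0.2, 3.1, 6.8, 7.2, 3.0, 0.1]
cutting_fraction = sum(machine_state(a) == "cutting" for a in readings) / len(readings)
```

Even this coarse classification is enough to answer the upgrade-or-retire questions Daniel mentions.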
Stephen Ferguson (07:52):
And the extreme of that, I guess, the final vision, is digital twins. So what role does the digital twin play in the factory of the future?
Daniel Reed (08:00):
So digital twin is one of those things where if you ask 10 people what a digital twin is, you will probably get 11 answers. So my definition of it may not be the same one that everyone is used to using, but how I like to think of it is the digital twin is a simulation that’s enhanced with data from the real world, which is where those sensors come in.
So when you think of a simulation, normally you’re building a make-believe world and then injecting conditions into it. And you have to tell it how to behave and then the rules of the world it lives in, and then you can simulate what’s going to happen. A digital twin takes that one step further.
So when you’re building a simulation, you could never perfectly simulate the real world. You’d have to be simulating quantum mechanics, and we don’t have the computing power for that, and we know that our simulations aren’t totally correct.
So the digital twin enhances that simulation by capturing data from the real physical system and learning about what a normal operation is, so that you can then use that model and that understanding to do all kinds of different things once you have that enhanced rich digital twin model.
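One simple way to “learn what normal operation is” from captured data, as Daniel describes, is to fit a statistical band to healthy-run measurements and flag anything outside it. This is a minimal sketch of that idea, not the project’s actual model; the data and the three-sigma threshold are illustrative.

```python
import statistics

def learn_normal_band(history, k=3.0):
    """Learn a 'normal operation' band (mean ± k·std) from healthy-run data."""
    mu = statistics.mean(history)
    sigma = statistics.stdev(history)
    return (mu - k * sigma, mu + k * sigma)

def is_anomalous(value, band):
    """Flag a reading that falls outside the learned band."""
    lo, hi = band
    return not (lo <= value <= hi)

# Illustrative airflow readings (litres/second) from a known-good run:
healthy_flows = [0.49, 0.50, 0.51, 0.50, 0.48, 0.52, 0.50]
band = learn_normal_band(healthy_flows)
```

Real digital twins use far richer models, but the pattern is the same: the physical data teaches the twin what “normal” looks like.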
Stephen Ferguson (09:12):
And I guess one of the issues is the definition of what normal operating conditions are changes throughout the lifetime of the product or device that you’re working on, doesn’t it? Because we all spend a lot of time and effort simulating our products, our equipment, under a very rigorous set of operating conditions, and then it goes out into the field, and people use it in different ways, and perhaps sometimes deliberately use it in different ways. And I guess digital twin gives us the data to understand what happens when we push our devices or equipment beyond the standards it was designed for.
Daniel Reed (09:46):
Right, exactly. You can interrogate that digital twin, introduce different scenarios to it, and see how it would react, even if you maybe didn’t design the system for that set of conditions initially. That’s a great segue to our lungs in the loop demo, because that’s exactly what we did: integrating digital twins into product development and, in particular, redevelopment.
So like you said, you might be using a system for a novel use case. It’s not designed for that, but with a digital twin, you can relatively simply go in, change those operating conditions and see what the impact is. And like you just mentioned, that’s so important because sometimes there’s latent behaviour that you’re maybe not even aware of.
One of the stories that I like to tell is a plant was finding that they were having really inconsistent outputs from the same production line, but they weren’t changing any parameters. Sometimes the parts were in spec, sometimes they were out of spec, couldn’t figure out why.
It wasn’t until they integrated humidity sensors that they understood what was happening was it was literally based on the weather and how much moisture was in the factory at the time, and the parts, how much water they were absorbing from the air. So, having that visibility down to that level can really help you get to the root cause.
Stephen Ferguson (11:05):
Which is incredible. So, for lungs in the loop, what was the core problem you were trying to solve? I know we all want to forget about Covid, but we’re going back to the pandemic, when there was a shortage of ventilation equipment, wasn’t there, or ventilators?
Daniel Reed (11:18):
Yeah, you’re exactly right. So, in the early days of the pandemic, there were a great many patients showing up to hospitals with respiratory issues who needed to be ventilated, but the nation’s stock of ventilators was too low. These are complicated, intricate medical devices. You can’t just flip a switch and crank out a hundred thousand of them.
So what was happening at the time, in the real world, was that doctors were having to make this difficult, almost impossible triage decision to place more than one patient on a single ventilator. That is not an approved use; the machine was not designed for that. But when your patient is going to die without a ventilator, obviously you’ve got to do something.
And so, that was kind of the thesis that started this particular project. It’s one that we worked with Siemens on. This came through CARES Act funding. Everyone remembers the CARES Act for the big things that it did for most people in terms of unemployment, but it also contained a lot of investment into research and development to try to prevent these kinds of problems from happening again. How can we do this better next time?
And that is what this project, the lungs in the loop and digital methodology framework, came out of. We call it rapid and secure deployment of medical devices and instrumentation, but that’s way too much of a mouthful. So let’s just call it lungs in the loop.
Stephen Ferguson (12:44):
Yeah. So the project, basically, what were you helping to design? What was your involvement in this?
Daniel Reed (12:51):
So, like I mentioned, the ventilator is a medical device. It’s not approved for use on two patients, but folks were having to do that. What nobody could really answer, however, was: is that safe? And if so, when is it safe and when is it not? When is it okay for two people to be on the same machine? Because of how the machine operates, the two patients essentially need to be in similar conditions.
But doctors weren’t really able to get that kind of guidance. So what we partnered with Siemens to do was take a holistic look at the product design for a ventilator and, using a requirements management system called Polarion, create a design fork that would allow us to draw out the differences between one-patient and two-patient operation, and then build a digital twin of the machine operating in both conditions.
And then, using that, we were able to then prove, okay, this is when it’s safe, this is how it’s safe, and actually then detect some anomalies that were occurring, potential patient health deterioration. So, start to finish, basically a medical product redesign and showing how you can use digital technology and digital twins to accelerate that process, make it safer and do it with better confidence.
Stephen Ferguson (14:16):
So at the start of this, you kindly provided us with your definition of digital twin. How does that apply to this lungs in the loop case then? So what does your digital twin actually consist of?
Daniel Reed (14:27):
So it has a couple of components, and I’ll try to keep it relatively brief. The digital twin consists essentially of a one-dimensional simulation of the ventilator operation and how gas flows through it. Basically, that’s a piping diagram, more or less: an Amesim model. And in that model we are computing, based on the operating conditions of the system, what we think the tidal volume should be for each patient. The tidal volume is how much air is going in and coming out with each breath, for both people.
And so, the difference is that a ventilator is not designed to be used on two patients, so it doesn’t have sensors or anything like that built into it for two-patient operation. When we built this prototype, the team put sensors in the operating line at critical points so that we would be able to see how much air was flowing past that point in real time.
So you’ve got this simulation model that is downscaled to a smaller model that can run in real time. It is taking data off of the real physical asset, it’s measuring airflow and then it’s comparing that to what it believes the airflow should be based on its directed operating conditions. And so, that is how it’s able to detect, okay, this is working how I expect, or this is not working how I expect. Something may be wrong.
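The comparison Daniel describes, measured airflow versus what the reduced-order model says it should be, amounts to a residual check. A minimal sketch of that logic, assuming flow in litres per second and an illustrative tolerance (neither value is from the project):

```python
def flow_residual(measured_lps: float, predicted_lps: float) -> float:
    """Difference between measured airflow and the model's prediction."""
    return measured_lps - predicted_lps

def check_branch(measured_lps: float, predicted_lps: float,
                 tolerance_lps: float = 0.05) -> str:
    """Flag a patient branch whose measured airflow strays from the
    real-time model's prediction by more than the tolerance."""
    r = flow_residual(measured_lps, predicted_lps)
    if abs(r) > tolerance_lps:
        return f"ALERT: residual {r:+.3f} L/s outside ±{tolerance_lps} L/s"
    return "OK"
```

In the real system this check would run on the edge device for each sensor point in each patient line, breath after breath.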
Stephen Ferguson (15:59):
So, I guess the ventilator is driving both patients to breathe at the same frequency, isn’t it? Is that how it’s working?
Daniel Reed (16:05):
That’s exactly right, yeah. So both patients would be typically sedated at this point and they’re breathing in the same rhythm. It’s happening at the same time. So it’s the same amount, same volume of air that’s being split and going to two people. And that’s exactly where some of the most interesting learnings from this digital twin came from because there’s a lot of built-in safety systems in a ventilator. It’s a medical device, but those safety systems are all tuned around one patient.
Here’s a great example, and this is the use case that we demonstrate on the digital twin, or the live twin as we’re calling it because it’s running right alongside in real time with the digital twin. So remember when I said there’s 10 different definitions, right? You can have digital twins that are running alongside the equipment in real time or offsite, in the cloud, different applications.
Speaker 3 (16:55):
This episode of the Engineer Innovation podcast is brought to you by Siemens Digital Industries Software, bringing electronics, engineering, and manufacturing together to build a better digital future. If you enjoyed listening to this episode, please leave a five-star review to help get the word out about the show. For more unique insights on all kinds of cutting-edge topics, tune in to siemens.com/simcenter-podcast.
Stephen Ferguson (17:23):
I think Siemens are calling that executable digital twins, but we’ll stick with live twin because that’s your definition.
Daniel Reed (17:28):
There we go. Now we’re up to 12.
Stephen Ferguson (17:31):
And so, you’re monitoring the pressure and the temperature. You’re also, inside the 1D model, you’re simulating the ventilator, and I guess both the patients as well. They have to be part of this as well.
Daniel Reed (17:41):
Yes. So, for the real physical system, the prototype system, we’re using lung simulators, which I didn’t even know that was a thing until I started this project. Apparently, there’s such a thing as a lung simulator. It’s used for ventilator product design. Who knew? We bought two of these lungs. I was hoping they would look a little grosser, but, unfortunately, it just looks like a bread box. It looks like a bread machine.
So we’re simulating the patient by virtue of those lung simulators, we’re simulating the ventilator and how it’s operating, and we’re watching the airflow as it goes through those pipes. And we’ve got two use cases here that show the difference between what you can do with on-device monitoring versus something like this digital twin.
So within the test loop, there is a leak device that basically introduces a small leak into the line. The ventilator is pushing air, pulling air back in. Turns out that the ventilator can detect that leak because it knows how much is going out and coming in. Regardless of whether you’re running one patient or two patients, it knows that air is escaping from the system somewhere. And so, if we turn on the leak detector, within a few seconds, an alarm will go off on the ventilator saying, “Hey, there’s a leak.”
On the other hand, one of the things that has been learned from clinicians in the intervening period is that for two patients to be ventilated on the same machine, they have to be in similar lung conditions, in terms of what’s called lung compliance.
So in order for both of you to be fed from the same machine, your lungs have to be able to take in air at the same pressure, so you’re not pushing too much to one person or not enough to the other. And we can simulate that too because we have lung simulators. So we can simulate someone’s condition deteriorating, or them getting sicker.
But what you notice is that if we do that, if we simulate having one patient that’s very sick and one patient that’s not that sick, and then run the machine, the ventilator will never throw an alarm, even though one of those two patients is not getting enough air. The ventilator doesn’t know that, because the ventilator wasn’t designed to work that way.
However, the digital twin does, because it knows what normal operation is. It’s learned over time what the appropriate operating conditions for the machine are. So it will throw an error, issue a warning, and say, “Hey, one of these two people doesn’t look like they’re getting enough air. Their lung condition may have deteriorated. You may not be able to keep these two patients on the same ventilator anymore.”
So that is, I think, a really powerful example of what this technology lets you do that the existing state of the art couldn’t.
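The lung-compliance effect Daniel describes can be illustrated with the standard linear model of respiratory mechanics: at a given driving pressure, a patient’s tidal volume is roughly compliance times pressure, so patients ventilated in parallel at the same pressure receive volumes in proportion to their compliances. A toy illustration (the numbers are made up, and real respiratory mechanics are far more complex):

```python
def tidal_volumes(compliances_ml_per_cmH2O, delta_p_cmH2O):
    """For patients ventilated in parallel at the same driving pressure,
    each receives tidal volume V_i ≈ C_i * ΔP (simple linear-compliance model)."""
    return [c * delta_p_cmH2O for c in compliances_ml_per_cmH2O]

# Two similarly compliant patients share the air evenly:
even = tidal_volumes([50, 50], 10)    # both get 500 mL
# If one patient's lungs stiffen (compliance drops), their share collapses,
# even though the ventilator's own one-patient alarms see nothing wrong:
uneven = tidal_volumes([50, 20], 10)  # 500 mL vs 200 mL
```

This is exactly the failure mode the digital twin catches: the sick patient’s branch flow no longer matches what the model expects.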
Stephen Ferguson (20:33):
So the digital twin has given you more insight than you get from the regular equipment. How is the digital twin making that decision? You’re collecting data from the sensors; how is that data being processed to come to that decision?
Daniel Reed (20:45):
So there is an edge device within the system, basically a small computer running locally, on which that model of operation runs. And it has that 1D simulation I mentioned before. Air has to go somewhere; it can’t just disappear. Mass flow can’t just disappear. Because of that, we know, or at least we expect, the air to be in a particular place at a particular time. If everything’s working how we think it should, this is how the system should behave.
And then once that behaviour is known by the system, when the operating conditions, the real operating conditions are functioning outside of the range of what we consider to be tolerable, normal operating conditions, then that system can alert us.
Now, digital twins generally, rather like the AI they often use, don’t necessarily know why something is behaving incorrectly. So usually a subject matter expert has to put some context around that and say, “Oh, it’s happening because of this or because of that.” But the digital twin is just monitoring the data and saying, “This is not behaving how I would expect it to, given what you’ve told me about how this is supposed to work. Is something wrong?”
And that’s where the human intelligence comes in: say, a surgeon or a respiratory therapist who knows, okay, this is what’s happening. I’m not a medical doctor; I learned all of this from the experts, who were able to add that context to what the digital twin noticed was abnormal.
Stephen Ferguson (22:17):
I guess the benefit of the digital twin, of course, is that at the moment you’re testing this equipment in your future factory space; but if it were deployed in hospitals all around the world, you could be accumulating all that data all the time, couldn’t you? You could be spotting the times when the digital twin identified an error that wasn’t really an error, and you could be learning from pushing that equipment beyond its normal operating envelope as well.
So I guess that’s the benefit is you’re doing it at the moment, only you’re doing it on a single digital twin, but you could have hundreds or thousands of these in hospitals across the world, couldn’t you?
Daniel Reed (22:49):
Absolutely. You’ve touched on a really interesting point there; we could probably spend a long time talking about that, but let’s stay in the realm of what you’re describing. All ventilators work more or less the same way, but maybe a ventilator in, say, New Jersey is breaking down.
And so, if you’ve got a digital twin that’s deployed across your entire fleet of equipment, now you can start to learn things about, “Okay, this is what it looks like before the machine breaks down.” We can start doing predictive maintenance or predictive quality. Or the machine designers can learn things from that data to say, “Hey, we didn’t design it for this condition. Or maybe we did, but something’s happening that we didn’t expect. Let’s make an update, next version of this, maybe we make a design change.”
And that’s a fascinating topic because the power of the data, potential of the data is huge, but there’s also risk. There’s also danger associated with that. This is real people’s data. This is real people’s health information that we are collecting, and it’s not okay for that not to be cyber secure.
So, when you get into data in what we call a connected care setting, where the machinery, the care devices being used on a patient, is connected into a digital framework, there are all kinds of reasonable concerns: how can you protect this data? How can we extract the value from it without putting individual patient privacy at risk?
Stephen Ferguson (24:25):
Yeah, I guess it’s different from a digital twin of your Tesla or your electric car, where nobody much minds sharing the data; when it’s personal health data, that changes. So there are definitely ethical considerations with this, which I hadn’t considered. That’s really interesting.
So you’ve eloquently described the benefits of sharing all this data, but there isn’t a lot of data sharing happening at the moment. Why isn’t there more data sharing, do you think?
Daniel Reed (24:49):
In my opinion, it’s because for most manufacturers, who are using machinery to make parts, really the component and the process by which they’re making it is that company’s intellectual property. That’s their secret sauce.
And so, even though the company that makes the milling machine might be very interested in understanding how are people using this machine, when does it break down, how can we do better, the manufacturer who bought it doesn’t want to share that data back with the machine designer because it’s their intellectual property.
What’s interesting is that in the medical device space, this isn’t totally true. There’s been a fairly robust history of data sharing back with medical device manufacturers from hospitals because they’re not using it differently. There’s no secret sauce. Everybody uses a ventilator on a patient the same way.
And so, this is providing that kind of collaboration space where in a healthcare setting, when you can overcome some of the privacy issues with patient data, there’s actually more of a culture of willingness to share because we’re all basically in this together. Everybody wants the same thing, which is a better outcome for patients.
Stephen Ferguson (26:05):
What’s next with the lung in the loop model? Is it a project which is finished or are you going to do some further work on it?
Daniel Reed (26:10):
So we are actually going to be extending it. We’re going to be doing some further work. We’re adding an active control device, so actually adding, not just sensors, but actually a control device that allows us to switch between different patient operating modes, and all of that will be incorporated into the digital twin.
One of the really cool aspects of how the follow-on project is going to work is that with digital twins, because it’s a model, you can model not just physical systems and physical behaviour, but you can include software in your digital twin. You can package the code that is controlling that control system, package it up, load it into the digital twin, and effectively, you’re digital twinning the software control in addition to the physical operation.
That is super powerful for cases like this, where you maybe wouldn’t be able to test something ethically, but you are able to make sure that your code is robust for all the different use cases. One of the examples people use is a braking system. Braking systems nowadays, like most machinery, aren’t just hardware; they’re hardware and software. And in the digital twin, you can pull both of those in, unify them, and test them together in that environment.
Stephen Ferguson (27:32):
Which is amazing because I think engineers from my generation, we grew up, we did simulation, we validated those simulations against a small set of test data and then put it out into the real world. Now we’re validating our models, our control systems with a continuous stream of data as well. And that’s incredibly valuable, isn’t it? It’s a quantum leap in our understanding of the operation of this equipment, but only if we can process the masses of data.
So data’s only useful, I guess, as long as we’re making decisions from it. And I guess that’s the other challenge of digital twins, isn’t it? Is processing and understanding and analysing and making good decisions out of huge amounts of data?
Daniel Reed (28:11):
Absolutely right: picking the parts that are useful to you, keeping those, and sifting through the enormous volume of information. You could put sensors everywhere, but maybe that’s not valuable. It’s more important to target the specific functionality you’re trying to understand.
One of the good use cases for digital twins in general is when you’ve got a complex system with data coming out of it and behaviour you maybe don’t totally understand, or something is happening that you can’t explain; a digital twin is a great fit for that. Or, like you say, for product simulation, simulation on different test samples.
The example I give with this project is: imagine that a ventilator manufacturer had this system set up ahead of time, before Covid. They’ve already got their design in an integrated requirements management system like Polarion. They’ve got all of their behaviour simulated, and that’s tied back to performance, regulatory, and functional requirements. And then Covid-19 hits, and a hospitalist picks up the phone and says, “I want to run two patients on one ventilator. What do I need to know? Is it safe? When is it safe?”
With the digital twin, with the model that’s integrated, they can basically take a branch of that design and ask: I changed this part of the design, so what do I need to double-check? What’s going to change in going from one patient to two? They can revalidate the things in the simulation that they have to, skip the things they don’t, and answer with confidence, “Hey, you can do it, but only if the patients are similarly sick, and here’s how to keep track of whether that’s true. And if it changes, here’s what you should do.”
That’s the power of being able to do it quickly and virtually. Medical device regulatory environments are very controlled, and change is very controlled, as it should be, for safety. But being able to do that work virtually lets you give those answers safely and quickly when you might not otherwise be able to.
Stephen Ferguson (30:23):
I’ve written an article about what I think is the first digital twin, which is the Apollo 13 disaster where NASA was stuck with a spacecraft with three people on board, a very long way from home, and had to reconfigure all their simulators to be simulators of that spacecraft.
So the spacecraft had suffered some damage and was operating differently from the spacecraft they had designed. And so, in Houston, they reconfigured all their simulators in real time, so they could run virtual experiments on how they were going to get those men home without killing them, rather than testing it on the crew floating around the Moon in their space capsule.
So it’s not a new concept, but it’s one that is finally finding its time and reaching fruition, isn’t it? And I think we’re going to see a lot more of it in the future, not least because of the efforts of organisations like MxD, which I think is really incredible.
Daniel Reed (31:11):
Yeah, I think you’re completely right. What’s different now is similar to how simulation technology itself has evolved over the years. Simulations used to require dedicated, resource-intensive computing, so you only ran one when you really, really had to. Now I can run an FEA simulation on my laptop, probably even on my cell phone, and it costs me almost nothing.
Digital twins, I think, are following that same track, just earlier in the process. So, as you pointed out, the digital twin is not new, but the difference between then and now is that back then, they had to build a physical copy of the system in order to replicate the behaviour of the system they were twinning. That’s a twin, but a real, physical twin rather than a virtual one.
Some of this work was done early on in, for example, safety testing with rail simulators, but they basically had to wire up a whole real locomotive. That’s profoundly expensive and challenging to do, so the barrier to entry was very high. But as computation costs come down and we’re able to do more of this virtually, rather than needing to build a physical system, the cost comes down.
So, all of a sudden, there’s better reason to do it for different kinds of things, and for more experimentation. So I think this idea is very much coming into its time, based on the technology and what people are trying to do with it.
Stephen Ferguson (32:38):
Which, I think, is an excellent spot to end this podcast as well. So we were looking for examples of living, breathing digital twins, and what you’ve given us is, I think, the best ever example of a breathing digital twin. So I want to thank you, Daniel. And for everybody who’s listened today, thank you for listening to the Engineer Innovation podcast.
Speaker 3 (32:57):
This episode of the Engineer Innovation podcast is powered by Simcenter. Turn product complexity into a competitive advantage with Simcenter solutions that empower your engineering teams to push the boundaries, solve the toughest problems, and bring innovations to market faster.
Daniel Reed is a Manager for Technical Projects at MxD. In this role, Daniel heads a small team responsible for managing and executing complex research and development projects among both industry and academic partners.
Take a listen to a previous episode of the Engineer Innovation Podcast: Boosting Norwegian Hydropower using Executable Digital Twin
Engineer Innovation Podcast
A podcast series for engineers by engineers, Engineer Innovation focuses on how simulation and testing can help you drive innovation into your products and deliver the products of tomorrow, today.