In this seventh episode of the Model-Based Matters podcast series, we discuss the crucial topic of simulation and testing. There are many ways to simulate, and the tests need to be matched to the simulators. This episode covers simulating the design, the control software, and the physics, as well as the various tests that show whether a vehicle accelerates as expected or a plane delivers the expected range and maneuverability. Our experts will cover these issues and much more.
Tim Kinman joins us again as Vice President of Trending Solutions and Global Program Lead for Systems Digitalization at Siemens Digital Industries Software. We are also privileged to speak with Michael Baloh and Jan Richter, also from Siemens Digital Industries Software. Today, the experts discuss the challenges that can be solved by automating tasks within the design process. Join us as we discuss the future of MBSE and the role that increasing complexity will play in the world of electronics.
Read the transcript (below) or listen to the audio podcast.
Nicholas Finberg: Welcome back to Model-Based Matters, Siemens Software’s one-stop shop for all things Model-Based Systems Engineering. I’m your host, Nicholas Finberg. And I’m joined, as always, by Tim Kinman, our VP of Trending Solutions and Global Product Lead for Systems Digitalization here at Siemens. In the last couple of episodes, we took a bit of a detour to dive into the role of electronics development in today’s complex products. But we’re back to defining one of the significant pillars of MBSE: continuous verification. Joining us this week are a familiar voice, Michael Baloh, from our episode on feature-based engineering, and Jan Richter, both of whom are controls engineers here at Siemens.
In our last episode, we discussed the various aspects of continuous verification, things like simulation, numerical analysis, product testing, and even business analysis. So, let’s move on to the next step in the process.
Okay, well, let me throw a rock into the pool. How do you feel about testing? To Michael’s earlier comment about figuring out what the customer wants in the end (validation), where does hardware-in-the-loop testing fit into that definition? Because when we build these complex simulation rigs for, say, an automobile, you’re going to have people sitting in a simulated car, figuring out how much wind noise they’re getting from this side-mirror shape, how much rolling resistance there is on these tires, and how all of that affects the driver’s overall comfort level. But you could also have other HIL processes, or, very likely, if you’re doing MBSE, you’re going to have software-in-the-loop testing too. So, how does this fit into continuous verification?
Michael Baloh: Well, there are a couple of points there. I mean, there’s the simulator. The simulator can take on all types of representations: you have Model-in-the-Loop; you have Software-in-the-Loop. You have a whole pantheon of flavors of simulators. And on the other side, you have the tests. I think I can infer where you’re going with this, but let’s think about matching the tests to the simulators. If I’m trying to figure out whether my design, the intent of my design, might be right, maybe I can do Model-in-the-Loop. I have a piece of control software, perhaps, and a piece of physics that’s simulated; I put the two together, run it on my desktop or my laptop, and I can understand the intent of my design. I can do different tests there to see if the vehicle accelerates, or if the plane has the range I expected and the maneuverability is there. And that’s about where I can take it. If I have more physics, I can start looking at noise; I can look at more complex phenomena. When Tim said there are all types of models, there are. And it’s how much physics you are willing to spend on making models that allows you to predict a phenomenon and, therefore, do this kind of early testing. Is that phenomenon, like the wind noise you mentioned, too loud or not? To answer that, you need the right kind of physics.
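The Model-in-the-Loop setup Michael describes can be sketched in a few lines. This is only a minimal illustration with invented numbers (the proportional gain, vehicle mass, and drag coefficient are made up for the example, not taken from any real program): a control model and a physics model are coupled and stepped on a desktop to answer an early question such as whether the vehicle accelerates toward its target speed.

```python
# Minimal Model-in-the-Loop sketch: a control model coupled to a simple
# longitudinal vehicle model, stepped entirely on the desktop.
# All parameter values are hypothetical, chosen only for illustration.

def controller(v_target, v_actual, kp=50.0):
    """Control model: proportional throttle force request, clipped to limits."""
    return max(0.0, min(2000.0, kp * (v_target - v_actual)))

def plant(v, force, dt, mass=1200.0, drag=0.4):
    """Physics model: longitudinal dynamics with quadratic aerodynamic drag."""
    accel = (force - drag * v * v) / mass
    return v + accel * dt

def simulate(v_target=25.0, t_end=120.0, dt=0.01):
    """Step controller and plant together; return the final vehicle speed."""
    v, t = 0.0, 0.0
    while t < t_end:
        v = plant(v, controller(v_target, v), dt)
        t += dt
    return v

# Early "does the vehicle accelerate?" test, long before any hardware exists.
final_speed = simulate()
assert final_speed > 20.0, "vehicle failed to approach target speed"
```

Swapping in a higher-fidelity plant model later (more physics, as Michael puts it) would let the same harness answer richer questions without changing the test itself.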
On the flip side, why do we sometimes start with Model-in-the-Loop and only later get to HIL and then go into a car or an aircraft? It goes to something else worth mentioning: as the design matures and you understand more, you can invest more time in making a more precise model. Early on, if you don’t understand your design well, there’s no reason to spend months generating a very complex CAD model, or a CFD model that predicts the sound of your car moving through a tunnel. There’s just no sense in it, because you don’t even know what the envelope of your car looks like yet, or maybe even the architecture of the powertrain. And how does HIL fit into that? Well, HIL works because, again, it brings another level of fidelity that allows you to test.
So, in Model-in-the-Loop, which I mentioned, you’re just checking: I built a model of my code and a model of my physics; does it simulate? Can I test? Yes, you can test certain things. But with HIL, now you generate code from your software model, you put that code on actual electronics, and you’re testing: “Does my software run on the electronics correctly? Does it use too much memory? Do the electrical interfaces all line up with what I thought? Does the software know how to talk to the ECU?” All of that you test within HIL. And HIL allows you to do that because it brings in the electronics part of the product. Every type of simulator out there brings in a different flavor of precision about what the product should look like, which allows testing that part of the system which the physics predicts.
Jan Richter: These are excellent points, Michael. Let me chime in here, since toward the end you came to electronic control units. So, let’s talk about embedded software running on electronic control units. Back to your question, Nick: testing is a vast term. A software engineer probably starts on his desktop, in his IDE, running the first local software unit tests and then, a bit later, software integration tests. But when it comes to the ECU, the classical approach is to run that in a lab with physical ECUs, where you need samples. And to observe precisely what’s going on inside the machine in terms of timing, you need to install probes, thereby disturbing the system to some extent. But, in the spirit of continuous verification and validation, we observe that people want to virtualize as much of that as possible and still be able to analyze essential topics like timing, memory consumption, and whether everything fits into the box in the end, in a way that doesn’t disturb the execution of the system itself, and that is, therefore, more flexibly and more scalably available for a large-scale rollout. That’s a trend.
Michael Baloh: Yeah, I think what you’re referring to is this virtual HIL. What I’ve seen is that our customers are taking advantage of AUTOSAR for this most of the time; you don’t need to, but AUTOSAR seems to be greasing the rails. If anyone’s unfamiliar with AUTOSAR, it’s a framework for designing software in the automotive world. In simplest terms, because it’s a framework (and I mentioned frameworks before), it allows us to do things like virtualizing the hardware. So, customers are taking advantage by building emulators or simulators of their hardware, and basic software that runs on that simulated hardware. Now they can move away from HIL. Well, not entirely; there are some things you need to test on real hardware. But you can, again, front-load even the electronics testing.
Jan Richter: That’s right. And AUTOSAR supports these activities by standardizing the runtime environment, the basic software layers, and the interactions between the different software modules. So, it’s possible to represent a virtual ECU at varying levels of fidelity, ranging from a mock-up all the way down to the actual production software stack.
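As a rough sketch of what Jan describes (the class names and the stand-in control law here are illustrative inventions, not AUTOSAR APIs): because the runtime environment standardizes the interfaces between software components, a single test harness can drive virtual ECUs of different fidelity interchangeably.

```python
# Illustrative sketch only: two virtual-ECU variants expose the same port,
# so one harness exercises both a mock-up and a fuller software stack.

class MockEcu:
    """Low-fidelity mock: a bare control law, no scheduling or memory model."""
    def step(self, sensor_value: float) -> float:
        return 2.0 * sensor_value                   # placeholder gain

class ProductionStackEcu:
    """Higher-fidelity variant: same port, but with behavior the production
    stack would add (here, just actuator saturation as a stand-in)."""
    def step(self, sensor_value: float) -> float:
        command = 2.0 * sensor_value
        return max(-10.0, min(10.0, command))       # saturate actuator command

def run_interface_test(ecu) -> bool:
    """One test runs against any fidelity level, thanks to the shared port."""
    return all(isinstance(ecu.step(x), float) for x in (-100.0, 0.0, 3.5))

assert run_interface_test(MockEcu())
assert run_interface_test(ProductionStackEcu())
```

The value of the standardized interface is exactly that the test harness above never needs to know which fidelity level it is talking to.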
Michael Baloh: One thing I’d add to what Jan said, and I don’t know if anyone caught it: HILs, if you’ve ever seen one, cost hundreds of thousands of dollars for the hardware alone. What one looks like is a giant PC rack. It has all the special electronics that allow high-speed simulation of, say, the physics, and then let that simulated physics talk to an actual electronic controller that might be in a car or an aircraft or a robot, giving it the simulated sensor inputs and reading the actuator outputs. You put these ECUs, these software programs, into something like The Matrix, if anyone’s familiar with the movies: the software is placed in a system where it can’t even distinguish that it’s not in the real world. And the thing about these simulators is that they’re costly, and there aren’t all that many. Even a big company, like one of our major customers, may have whole rooms full of them: 20, 30, 40. But even that is not enough. And, Jan, what is fundamental about the virtualization of HILs is that you can start running them on people’s computers, if the computer is fast enough. So, engineers can essentially run a virtual HIL at their desks, observing the electronics and the software behavior.
Jan Richter: These things could even run faster than real time if the computing power permits. With physical HIL, you need to ensure that your physics simulation runs exactly in real time to match the ECU clock speed.
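Jan’s point can be illustrated with a small timing sketch (the step function is just a stand-in for a physics model; all names and numbers are invented): a physical HIL must pace every simulation step to the wall clock, because the real ECU samples on its own hardware clock, while a virtual HIL is free to run as fast as the host allows.

```python
# Sketch of real-time pacing (physical HIL) versus free-running (virtual HIL).
import time

def run_paced(steps: int, dt: float, step_fn) -> float:
    """Physical-HIL style: each simulation step is held to its wall-clock slot."""
    start = time.perf_counter()
    for i in range(steps):
        step_fn()
        # Sleep off whatever real time remains in this step's slot.
        remaining = start + (i + 1) * dt - time.perf_counter()
        if remaining > 0:
            time.sleep(remaining)
        # If remaining < 0, the model overran its slot: a real-time violation.
    return time.perf_counter() - start

def run_free(steps: int, step_fn) -> float:
    """Virtual-HIL style: run as fast as the host CPU allows."""
    start = time.perf_counter()
    for _ in range(steps):
        step_fn()
    return time.perf_counter() - start

plant_step = lambda: sum(x * x for x in range(50))  # stand-in for a physics step

paced_time = run_paced(100, 0.01, plant_step)  # locked to roughly 1 s of wall time
free_time = run_free(100, plant_step)          # typically far faster on a modern host
```

This is also why a too-slow physics model is a hard failure on a physical HIL but merely a slower run on a virtual one.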
Michael Baloh: That’s precisely the point. And that’s a significant limitation of HIL that people don’t realize. But having said that, I mean, HIL is popular. It is the first thing most companies seem to adopt when they do simulation.
Jan Richter: And it’s necessary.
Michael Baloh: It is. It’s necessary. That’s one point; it’s essential. But also, it’s a bolt-on capability to an existing process. In other words, maybe it’s a company that doesn’t have a lot of strength in simulation, modeling, and simulated testing. But you know what? They can open a HIL lab. All they need is a sample of their ECU and a few people to program the HIL, and they can start simulating and testing upfront. And that’s an easy place where most companies begin. But as they get more mature, they start thinking about virtualizing further and further to the left.
Nicholas Finberg: Picking up on your point about maturity: we’ve been talking a lot about the different ways to simulate, analyze, and test all the models within your design. We started at the very beginning, with the system architecture, and now we’re continuing to… I don’t know how far. Do you have an idea? Does it end when the product rolls off the line? Does it continue through the lifecycle of the product? Or does it even continue into the next project? How far does continuous verification and validation go within the MBSE process? Or is that impossible to answer because of the number of possibilities?
Michael Baloh: Well, what comes to mind is the IoT.
Tim Kinman: That’s exactly where it goes, is IoT.
Jan Richter: It reaches well into operation. Especially in the world of software-defined products and connected products that connect with the IoT, we need to make sure that, for instance, the cyber security of the software is ensured at all times. And the threats evolve, so that work doesn’t stop when the product is fixed at the start of production.
Tim Kinman: I think you’re walking into digital twin territory now, because in the overall picture, the product is released, the product is being used, and it’s operational in its environmental context. The digital twin continues to evolve. It’s learning: the digital twin is learning while getting information about how the product performs. And as the digital twin learns, those models are also evolving. As you make changes or updates, you need a way to verify those changes and those updates. So, this is a crucial part. The heavy lifting is done in the initial definition, development, and release; yes, that’s important, and we have a digital twin that mirrors the physical product when we release it. And we’re going to continue to evolve, mature, and refine the digital twin. But again, any updates that happen in operations need to be verified. And if changes are required going back to engineering, then the changes that come out of engineering back into operations also need a way to be confirmed. So, it is continuous, not just across the lifecycle but also in each iteration of performance.
Michael Baloh: And a practical example is over-the-air updates, where you see some companies, some of our customers, monitoring how their products are being used, looking for incidents, analyzing those incidents in a big server farm, then updating their software and reflashing it into the product, keeping it up-to-date, secure, and safe. That’s an example of what we’re doing today. But I don’t know how far it can go in the future.
Tim Kinman: To be even more precise, this example holds whether it’s a car, a windmill, or anything like that. The product is in operation, it is telling me how it is performing, and in operations I’m also going to do verification, because I may see that one of the products is not achieving its performance targets. So, I’m going to do a verification within the operations activity to confirm, by manipulating parameter values: is that a real problem? Is it a tuning, configuration, or calibration issue with the product? And I’m verifying that using the same models, the same digital twin models, that were part of the release. So, verification doesn’t always happen in engineering. It could happen in the field, and I’m using the models and verification to confirm, in the field, in the operations context, that the product is achieving the right outcome.
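Tim’s field-verification idea can be sketched roughly as follows. Everything here is hypothetical (the twin model, the telemetry value, and the candidate calibration values are invented for illustration): the released digital-twin model is swept over a calibration parameter to see whether a calibration setting, rather than a hardware fault, best explains an underperforming unit.

```python
# Hedged sketch: use the released twin model in the field to check whether
# an underperforming unit is a calibration issue. Model and data are invented.

def twin_power_output(wind_speed: float, pitch_gain: float) -> float:
    """Simplified twin model: output scales with wind, degrades if gain is off."""
    return 100.0 * wind_speed * (1.0 - abs(pitch_gain - 1.0))

measured_output = 400.0   # telemetry from the fielded unit (hypothetical)
wind_speed = 5.0          # operating condition at the time of measurement

# Sweep candidate calibration values; find which best explains the telemetry.
candidates = [0.7, 0.8, 0.9, 1.0, 1.1]
best = min(candidates,
           key=lambda g: abs(twin_power_output(wind_speed, g) - measured_output))

# A best-fit gain far from the nominal 1.0 points to a calibration problem
# rather than a hardware fault.
```

The point of the sketch is only that the same model used at release is reused, unchanged, as the verification reference in operations.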
Michael Baloh: And you mentioned “iterative.” I’m not sure if you meant product iteration, but there’s also a strong likelihood that as you see your product being used, you realize the actual use is not what you initially thought. And that opens up all types of possibilities. It might make you realize, “Wait a second, our system is poorly engineered for certain use cases we didn’t anticipate, and maybe over-engineered for other use cases we thought were important.” So now the whole product can be optimized. In the next iteration, we go back to that early concept; we can start tweaking the design, trimming the fat and strengthening where we need to strengthen, making the product in subsequent iterations even more tuned to where our customers want it to be.
Jan Richter: I think these are great points, and the operational feedback, the monitoring use case, is a perfect one. Being a control engineer, I see feedback loops all the time. And I see feedback loops on two timescales here. In hardware design, the cadence is significantly slower than in software. So, some of the feedback will influence the software that gets developed, most likely according to agile development methodologies and process frameworks, with heavy usage of CI/CD: continuous integration, delivery, and deployment. Other feedback will affect the longer hardware design cadence and, therefore, the next product iteration, probably a couple of months or years down the timeline.
Michael Baloh: Maybe I want to mention one other thing; it’s not strictly a follow-up to what we were discussing, but I think it’s an important point worth noting. Tim mentioned modeling requirements, and we need to spend a few minutes talking about that. Typically, you see a customer or engineer using Microsoft Word to describe requirements. Yes, it’s good that people are doing requirements engineering. But they’re missing the advancement of being able to model requirements, which means we can start representing requirements as things that are much more granular.
And because they’re very granular, we can use databases to say, “Okay, if I change a requirement, which part of my design, which is maybe managed by my database, has to be changed or inspected? And which tests might we need to rerun?” So, we can use these databases to trace: what part of our design needs to be reviewed and redesigned? What part of our tests needs to be rerun? And then, we can run those tests during the night cycle. When we come in in the morning, we can hit the whole thing again: “Okay, what changes do we need to make? What’s the impact? What do we need to rerun? What do we need to retest?” And we can keep doing this until our product literally goes out the door.
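A minimal sketch of this traceability idea, assuming a simple in-memory mapping rather than any particular database product (the requirement, design, and test IDs are invented): granular requirements link to design elements, which link to tests, so a changed requirement yields exactly the set of tests to queue for the nightly run.

```python
# Illustrative requirement-to-test traceability; not a real database schema.

req_to_design = {
    "REQ-001": ["brake_controller", "pedal_sensor"],
    "REQ-002": ["brake_controller"],
    "REQ-003": ["hmi_display"],
}
design_to_tests = {
    "brake_controller": ["TEST-10", "TEST-11"],
    "pedal_sensor": ["TEST-12"],
    "hmi_display": ["TEST-20"],
}

def impact_of_change(req_id: str):
    """Return (design elements to review, tests to rerun) for a changed requirement."""
    designs = req_to_design.get(req_id, [])
    tests = sorted({t for d in designs for t in design_to_tests.get(d, [])})
    return designs, tests

designs, tests = impact_of_change("REQ-001")
# Only the affected tests go into the night cycle, not the full suite.
```

Word documents cannot answer the `impact_of_change` question automatically; requirement models held in a database can, which is the advancement Michael is pointing at.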
Tim Kinman: I’m glad you brought that up, because we’ve talked about an agile view of continuous verification, but there is also the management of it, because somebody is always worried about verifying for compliance reasons, for safety or other regulatory requirements. Have we ticked the box that says we achieved our requirements under the stated conditions? Do we have artifacts and evidence that tell us we’ve achieved that? Many of the things we’ve been talking about so far are about how we facilitate decisions and the quality of the engineering activity. But when you come to verification management, you gave one example and I gave another: it is all about managing the results to show that the product has achieved its stated targets, and also to show, in cases of compliance, that it has done so within the regulatory constraints under which the product is going to operate.
And then it also helps me understand, when I make modifications or changes, that my decisions could have a different impact on the product. It may be that changing a requirement seems easy, but then I look at the effect on verification and say, “Wow! I’m going to have to rerun a month’s worth of tests.” So, it may not be the most cost-effective engineering choice versus an alternative whose impact on testing would be five percent of that. Managing the results of verification allows you to make business decisions, again, not just engineering ones. In the total product view, verification management lets you see that holistic view of the product’s ability to meet the stated outcome in a regulatory-compliant way, and to understand the effect of change throughout the lifecycle.
Michael Baloh: Yeah, a significant point, and there’s something worth mentioning when you talk about the business motivation. Why models? Well, models and databases can be used to generate all that documentation, evidence, and proof. So, not only are models useful for verification as a testing and engineering activity, but they can also be parsed and used to drive a whole lot of automation that creates our documentation, our proof for compliance. And that’s a big deal when you consider that you’re constantly making changes to your design because you’re very agile. Having people chase that trail and continually update the documentation and evidence is hard; that’s why automation is needed, and that’s why models are so critical.
Nicholas Finberg: Well, this has been an amazing discussion. I didn’t even get to ask half of my questions, but somehow you guys got around to answering all of them. So, unless there’s something else we’ve missed that you’d like to talk about, I want to go ahead and thank all of you for joining me today.
Tim Kinman: Fantastic.
Siemens Digital Industries Software is driving transformation to enable a digital enterprise where engineering, manufacturing and electronics design meet tomorrow.
Xcelerator, the comprehensive and integrated portfolio of software and services from Siemens Digital Industries Software, helps companies of all sizes create and leverage a comprehensive digital twin that provides organizations with new insights, opportunities and levels of automation to drive innovation.
For more information on Siemens Digital Industries Software products and services, visit siemens.com/software or follow us on LinkedIn, Twitter, Facebook and Instagram. Siemens Digital Industries Software – Where today meets tomorrow