Thought Leadership

The application of Model-Based Systems Engineering – ep. 6 Transcript

By Nick Finberg

In this sixth episode of the Model-Based Matters podcast series, we learn the importance of creating models while designing new products. We also discuss what continuous verification involves, the role it plays in keeping the engineering team on track, and the difference between product verification and validation.

We are joined again by Tim Kinman, Vice President of Trending Solutions and Global Program Lead for Systems Digitalization at Siemens Digital Industries Software. We are also privileged to speak with Michael Baloh and Jan Richter, both from Siemens Digital Industries Software. Together, the experts discuss the challenges that can be solved by automating tasks within the design process. Join us as we discuss the future of MBSE and the role that increased complexity will play in the world of electronics.

Read the transcript (below) or listen to the audio podcast.

Nick Finberg, Writer – Global Marketing at Siemens

Nicholas Finberg: Welcome back to Model-Based Matters, Siemens Software's one-stop shop for all things Model-Based Systems Engineering. I'm your host, Nicholas Finberg, and I'm joined, as always, by Tim Kinman, our VP of Trending Solutions and Global Program Lead for Systems Digitalization here at Siemens. In the last couple of episodes, we took a bit of a detour to dive into the role of electronics development in today's complex products. But we're back to defining one of the significant pillars of MBSE: continuous verification. Joining us this week are a familiar voice, Michael Baloh, from our episode on feature-based engineering, and Jan Richter, both of whom are controls engineers here at Siemens. But before we plunge into the ocean of continuous verification and its solution set, would each of you mind telling the audience a bit about yourself?

Michael Baloh, Control Engineer at Siemens

Michael Baloh: My name is Michael Baloh. I've worked for Siemens for several years now, but my background is in controls and systems engineering. Since I was in school, I've worked with models to design all types of complex mechatronic systems. And it's almost certain that some of our customers or listeners own products I've worked on that contain software I designed.

Jan Richter, Domain Director Embedded Software Components at Siemens Digital Industries Software

Jan Richter: Hello, my name is Jan Richter. I have also been with Siemens for a couple of years now. I'm in product management, working in the area of software. Interestingly, my professional background is also in controls, so I enjoy control engineering and the interactions between control and software.

Nicholas Finberg: Like most of the solutions around Model-Based Systems Engineering, there is not a single path to success but a group of closely related ones, and continuous verification is no different. So with that, I think it might be good to define some of the dimensions that form continuous verification, such as simulation, numerical analysis, product testing, and even business analysis.

Tim Kinman, Vice President of Trending Solutions and Global Program Lead for Systems Digitalization at Siemens

Tim Kinman: We've been spending a lot of time talking about system specification, system design, and going back to architecture, and continuous verification spans all of it. The product starts as a concept that we want to verify, from the product specification in the early requirements through operational and functional analysis. So, we need continuous verification very early, as we specify the system. And then, as we work our way into system design, where we're starting to do the early architecture trade-offs, we also need continuous verification connecting behavior to the physics, to the mechanics of that product. And that runs throughout. So, when we talk about continuous verification, we mean the whole lifecycle from concept to release. We suggest something in the spirit of Agile, meaning concurrency – that continuous and repetitive verification of the solution. And doing that requires different personas and different applications across that landscape.

Michael Baloh: Well, if I were to start explaining this, I would think about how I talk to my customers about it. There's a lot of hesitancy. Some customers ask, "Why do I need to do all this modeling? I've managed effectively with PowerPoint and Microsoft Word for years." So we need to start at the same ground level. MBD and MBSE are very different from earlier document-based approaches, where it is at the author's discretion how to draw, create images, and explain their thoughts, and where building, designing, and testing can take weeks. Not to mention that with documents, if you have the liberty to describe anything as you like, there's always room for ambiguity and misunderstanding. So what do we mean by "models," and how do models ultimately drive this whole verification and validation? Models are not like typical documents.

They can live in a document, but they have semantics and rules for how the model is built. And it's those semantics that allow us to make tools that analyze the models and check that they're correct. Think about building something with software: anyone who's taken a coding class has tried to write a little "hello world" program, tried to compile it, and found it doesn't work. That's an example where, right from the get-go, using a language with a standard allows a tool to parse what you wrote and check that you've built it correctly. That's a very early instance of verification, and everything builds on top of it. I think it's a good place to acknowledge that models are based on languages, and the languages help us go from there.
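To make that earliest kind of verification concrete, here is a minimal sketch using Python's built-in ast module to parse a snippet before it ever runs. The snippet and the check_source helper are invented for illustration; they are not part of any Siemens tooling.

```python
import ast

def check_source(source: str) -> bool:
    """Parse the source text and report whether it is well-formed.

    Parsing against the language grammar is the earliest verification
    step: it catches structural errors before anything is executed.
    """
    try:
        ast.parse(source)
        return True
    except SyntaxError as err:
        print(f"Not well-formed: line {err.lineno}: {err.msg}")
        return False

# A "hello world" with a structural mistake (missing closing parenthesis).
check_source('print("hello world"')   # reports the syntax error

# The corrected version parses cleanly.
check_source('print("hello world")')  # returns True
```

The same principle is what lets modeling tools flag an ill-formed model immediately, instead of weeks later in a design review.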

Tim Kinman: Well, I think that's the key point. Why do I build models? I don't build models because I want to document an outcome. I build models because I want to simulate along the way. The key to modeling is the ability to execute those models – to have an executable model that confirms the hypothesis, lets me evaluate alternatives, and ensures the right outcome, whether that's the mechanical engineering or the performance outcome. It allows me to simulate using models throughout my thinking and development processes.

Michael Baloh: That's how we think of models too. You've mentioned simulation a lot, but I don't want to orphan its sibling, which is analysis. When you build something semantically correct – so it follows the rules of a language – you can parse it and go beyond basic semantic checks to another level: you can do an analysis. In software engineering – I know this from my controls background – there are ways of representing a software design, a model, as a graph. You can then trace paths on the graph and ask really interesting questions about your design, like: do I have dead code?

Or I can use graph theory to check whether my system is secure and whether it respects all the requirements for which it was designed. You can understand how useful that is for the new control systems we're developing today. Without even using simulation, you can analyze – walk – your control design and ask, "Is there a situation where a person could input a signal, or feed something into the system, that gives them admin rights?" You can check those things. That's part of the whole process of syntax tree analysis. So we have to remember that alongside simulation, which is very important, this analysis is also very important – especially for architecture modeling, where you don't necessarily simulate but you do want to check: "Are all the ways I view my system in a system architecture model consistent with each other?" And things like that.
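For a flavor of that kind of analysis, here is a minimal sketch that treats a design as a directed graph and finds unreachable states – the "dead code" case – by plain reachability. The state names and transitions are invented for the example, not taken from any real design.

```python
from collections import deque

# A toy control design as a directed graph:
# each state maps to the states it can transition to.
design = {
    "Init":    ["Standby"],
    "Standby": ["Active", "Fault"],
    "Active":  ["Standby", "Fault"],
    "Fault":   ["Standby"],
    "Debug":   ["Active"],   # leftover from development
}

def reachable(graph, start):
    """Breadth-first search: every state reachable from `start`."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Any state not reachable from Init is dead: no input sequence
# can ever drive the system into it.
dead = set(design) - reachable(design, "Init")
print(dead)  # -> {'Debug'}
```

Security-style questions ("is there a path from a user input to an admin-rights state?") reduce to the same kind of path query over the graph, with no simulation involved.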

Nicholas Finberg: That's a lot of different kinds of models just in the very beginning of the development process and system architecture. Are there any specific models more relevant to a continuous verification workflow? Or is it more of an umbrella under which everything – simulation and analysis alike – will eventually fit?

Michael Baloh: When I think of continuous integration and continuous delivery, sometimes called CI/CD – this is where I see many of our customers pushing the edge. They tend to be the more mature customers in simulation, controls, and mechatronics, and you can go out there and read about what they're doing. I can tell you a bit about how they do it. It ties back to why I initially mentioned semantics and the rules for building models. And I say mature customers because of what happens in this process: people start building models, and soon they have a lot of models. That's the normal evolution on the walk to becoming MBSE-smart. They create a lot of models, they look at them, and they start noticing patterns between the models. And they say, "Wait a second. We can combine models, break them down into libraries, and start architecting them into frameworks."

Once they have these conventions, models can be broken down into parts, shared with many people or different organizations or collaborators in various businesses, and then brought back together and reintegrated based on those conventions. So now they can do continuous integration and verification throughout the lifecycle, because they have people building things early in the design and later in the design, and they get results back. They can use automation to assemble the simulators. They can reuse automation to run batches of simulations daily. Because they've got the experience, they can build automation that even reviews the results and passes or fails the test cases. So, that's an essential keystone as the customer matures: they start understanding these patterns about how they work with models and how models can be broken down, building up frameworks, and using automation to drive this giant engine of continuous model building, continuous testing, continuous verification, and validation.
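As a sketch of that automation layer, here is a minimal nightly batch runner that sweeps a parameterized model through test cases and passes or fails each one against a tolerance. The model, the cases, and the thresholds are all invented for illustration, not taken from any customer setup.

```python
import math

# Hypothetical reduced-order model: first-order step response.
def model_response(t: float, time_constant: float) -> float:
    return 1.0 - math.exp(-t / time_constant)

# Each test case: model parameters plus an expected value and tolerance.
CASES = [
    {"name": "slow_plant", "tau": 2.0, "t": 4.0, "expect": 0.865, "tol": 0.01},
    {"name": "fast_plant", "tau": 0.5, "t": 4.0, "expect": 1.000, "tol": 0.01},
]

def run_batch(cases) -> int:
    """Run every case, compare to expectation, report pass/fail."""
    failures = 0
    for case in cases:
        got = model_response(case["t"], case["tau"])
        ok = abs(got - case["expect"]) <= case["tol"]
        failures += not ok
        print(f"{case['name']}: {'PASS' if ok else 'FAIL'} (got {got:.3f})")
    return failures

if __name__ == "__main__":
    # A CI job would invoke this nightly and gate on the exit code.
    raise SystemExit(run_batch(CASES))
```

The real systems Michael describes do the same thing at scale: assemble simulators from model libraries, run the batch, and let the pass/fail report drive the next day's engineering work.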

Tim Kinman: There are different models, though. We can have requirements models, where at the beginning we verify that I have represented my requirements effectively. I could have a functional model with requirements allocated to those functions. Using those models, I want to do a different type of verification: that I have the appropriate allocation of needs to functions, and of functions to requirements. I may want to walk through a state machine to confirm the functional behavior. So, in those models, I'm also doing verification. Then I also have to think about the fidelity of the models. Because as we go through verification, we may start with reduced-order models and say, "Good enough." For what I'm trying to do initially, reduced-order models are good enough. And that allows me to do the verification at the level of fidelity appropriate to where I am in my decision process.

But of course, as I get further into my development cycle, as I get close to release, I expect the fidelity to be as close as possible to the physical representation, so I can truly verify that I will achieve the outcome. So, Nick, to your question about model types: yes, there are different types of models. We also need to recognize that those models need to talk to each other in a verification process. I need to understand how I can connect the models for behavior to the models for physics. And I also need to realize that, across the lifecycle, I may start with low-fidelity, reduced-order models. In the beginning, they're good enough. But as I progress, those models themselves will also improve in fidelity through release.
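One way to picture that progression is to check a reduced-order model against a higher-fidelity one and tighten the acceptance band as release approaches. Both models and both tolerances below are invented for illustration; real programs would compare against detailed physics models or test data.

```python
import math

# Hypothetical models of the same step response at two fidelities.
def reduced_order(t):    # first-order approximation
    return 1.0 - math.exp(-t)

def higher_fidelity(t):  # second-order, lightly damped
    wn, zeta = 1.2, 0.8
    wd = wn * math.sqrt(1 - zeta**2)
    return 1.0 - math.exp(-zeta * wn * t) * (
        math.cos(wd * t) + (zeta * wn / wd) * math.sin(wd * t)
    )

def max_deviation(t_end=10.0, steps=200):
    """Largest gap between the two models over the time horizon."""
    return max(
        abs(reduced_order(i * t_end / steps) - higher_fidelity(i * t_end / steps))
        for i in range(steps + 1)
    )

# Early in development a loose band may be good enough;
# near release the band tightens and forces higher fidelity.
dev = max_deviation()
for stage, tol in [("concept", 0.30), ("release", 0.05)]:
    verdict = "OK" if dev <= tol else "needs higher fidelity"
    print(f"{stage}: deviation {dev:.3f} -> {verdict}")
```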

Nicholas Finberg: Okay, cool. Jan, do you have any thoughts on this?

Jan Richter: I'd like to go back to the term continuous verification and validation: what does it mean, and why do we do it? Especially when recognizing that verification and validation have been around for a long time – essentially forever in engineering. The goal is to fail as fast as possible instead of failing late with the design. We achieve that by shifting the checking of a design's adequacy to the left and, as Tim and Michael explained earlier, by formalizing engineering designs – giving them precise semantics in an analyzable form – as early as possible in the lifecycle. So, in conclusion, such continuous V&V spans all stages of the lifecycle and should touch every model built to create the system, rather than document the system, analyzing upfront whether the design is consistent and in line with the objectives and with why the system is being built. It's all about risk reduction and avoiding late surprises.

Michael Baloh: And I want to mention something. We're using the word "verification" a lot, but I think we also need to include "validation," and there are different levels of validation. Think of the difference between verification and validation. Verification is: I want to check that what I built is correct to the design – I built what I intended to design. And validation is: this is really what the customer wants – the customer desires a product, and I create a product that matches their wants. But what I would like to stress is that it's too easy to say simulation is just about verification. That doesn't jibe precisely with my own experience. I believe I've learned a great deal about how I would want to use a product – the use cases and the requirements for that product – using analysis and simulation. In that sense, the separation between verification and validation becomes a little blurry for engineers using a model-based approach, because they start to see and understand the product in its context. They can be more informed. They don't just design what they've been told to design; they become more interactive. I think that's one of the reasons why products can be more innovative and the designs can come together faster.

Tim Kinman: If we describe continuous verification by saying it's continuous analysis, continuous simulation, continuous verification, continuous validation, and continuous test, it gets pretty long to say over and over. So, we say continuous verification, but we build into our meaning that, along the way, it's all about being able to answer, virtually and digitally, at every point: have we de-risked the product, and is the outcome what we expect? That's really what we're trying to do, and the earlier we can do it, the better. I think the challenge we always have is that we mix these words. What do we mean by analysis? What do we mean by verification, or validation, or simulation? They intertwine. I know they have specific definitions – Michael, you walked us through that – but the distinctions become so subtle that what matters is understanding what we're trying to accomplish. I always look at the front-loading part more from the analysis side because, yes, I'm doing verification and simulation, but on the front end, in de-risking, we ensure agreement on what the product is intended to do. And that's what we're doing: the analysis of my options, the verification that I got those choices right, and moving that into architecture. Then we do the same thing: we analyze my architecture choices, and I verify that the architecture will meet my parameters, my targets, and so on. We'll use models for a certain period, and then we'll move to a combination of models plus an actual outcome from development, whether it's software or hardware, and we'll keep connecting those through that whole period.

So we've just called it continuous verification. The idea is not to get hung up on the dictionary definitions of verification, analysis, simulation, and validation but, in the spirit of what we're trying to do, to explain it across the entire lifecycle. I think it's good that you point out the distinction, because maybe not all the listeners understand it. But what we're trying to get across is the ability to execute things early and to keep executing digitally throughout definition and development, through release – understanding that different models will be developed over that period, and that those models will talk to each other as the product definition and the product development evolve. Eventually, the point is that when I get to the end and I am using that product in its environmental context, it does what I set out to accomplish. That's, I think, in the big picture, what we mean and what we're trying to represent by this whole continuous verification discussion.

Nicholas Finberg: Okay, thanks for joining us in this informative discussion; we will continue this conversation with Tim, Michael and Jan in our next podcast. 


Siemens Digital Industries Software is driving transformation to enable a digital enterprise where engineering, manufacturing and electronics design meet tomorrow.

Xcelerator, the comprehensive and integrated portfolio of software and services from Siemens Digital Industries Software, helps companies of all sizes create and leverage a comprehensive digital twin that provides organizations with new insights, opportunities and levels of automation to drive innovation.

For more information on Siemens Digital Industries Software products and services, visit siemens.com/software or follow us on LinkedIn, Twitter, Facebook and Instagram. Siemens Digital Industries Software – Where today meets tomorrow.


This article first appeared on the Siemens Digital Industries Software blog at https://blogs.sw.siemens.com/thought-leadership/2022/02/09/the-application-of-model-based-systems-engineering-ep-6-transcript/