
The application of Model-Based Systems Engineering – ep. 3 Transcript

In this third episode of the model-based systems engineering (MBSE) series, I am joined again by Tim Kinman, Vice President of Trending Solutions and Global Program Lead for Systems Digitalization at Siemens Digital Industries Software. We are also talking with Piyush Karkare, Global Director for Automotive Industry Solutions; Michael Baloh, Control Engineer at Siemens; and Brad McCaskey, Portfolio Executive at Siemens.

Today, these experts discuss the importance of product definition in representing the design, enabling modular decomposition into relevant workflows, and what these modules look like as they come into an engineering department.

Please learn more in the transcript (below) or listen to the audio podcast.

Read the podcast:

Nick Finberg, Writer – Global Marketing at Siemens

Nicholas Finberg: Thank you all for joining Tim and myself for today’s discussion. So far, we’ve taken a high-level look at what MBSE is and the growing importance it has for complex products, and we’ve talked with a few experts about where to start the development process by defining system requirements, or the product definition. Joining us today are Piyush Karkare, Brad McCaskey, and Michael Baloh. Could each of you tell me a little bit about yourself before we dive into today’s topic of connected engineering?

Piyush Karkare: This is Piyush Karkare. I am a Global Director for Automotive Industry Solutions here at Siemens. I mainly look after all the different solutions we can build for the automotive industry, given the wide portfolio we have here at Siemens. My background is in electrical and software engineering, although by trade I’m a mechanical engineer – I’m kind of a ‘jack of all, king of nothing.’ Mechanical, electronics, electrical, and software – I’ve worked across all of these domains.

Piyush Karkare, Global Director for Automotive Industry Solutions at Siemens

Michael Baloh: My name is Michael Baloh. I work at Siemens, and I’ve been a practicing control engineer for about 20 years. I’ve developed control systems for material handling and manufacturing, and also for automotive and other mobility applications. Presently, I’m responsible for defining the controls engineering strategy in our company, DISW.

Brad McCaskey: Hi, I’m Brad McCaskey. I’m a Portfolio Executive here at Siemens. I focus on model-based systems engineering all the way through the process – from requirements, through engineering, into manufacturing and after-sales, and then a round-trip back. I’ve been in design automation for about 40 years. I started my journey in model-based engineering in 2005, working on body controllers as they started to get more complex. So, I’ve been on the model-based systems engineering journey for quite a while.

Brad McCaskey, Portfolio Executive at Siemens

Nick: Well, awesome. In our last episode on product definition, we left the audience with the note that the role of the definition is to represent the what and the why of the design, enabling modular decomposition into relevant workflows. What can these modules look like coming into an engineering department?

Tim Kinman: So, maybe to get the ball rolling here, we start with product definition. But we also end with architecture – a set of interconnected information that represents the overall system architecture, containing all the requirements, functions, and the overall behavior of the what and the why that we’re trying to achieve. And we have a cross-domain audience and speakers on the phone today talking about the interrelationship: how we take that system architecture and not only try the initial technical feasibility, but start coordinating and collaborating concurrently on the overall feature-centric development process. To get it rolling, we should talk about what a feature is. What does it mean to be feature-centric?

Tim Kinman, Vice President of Trending Solutions and Global Program Lead for Systems Digitalization at Siemens

Piyush: So, this feature-centric method or mentality started probably around 2007 in the automotive industry, with the recognition that, in the way vehicles are sold and the way customers pay for them, a lot of the monetization comes from features the customers recognize and pay for, like lane assist or power mirrors. On the other side, OEMs have been trying to figure out how to reduce and track engineering cost while the complexity going into these vehicles increases exponentially. So up came this concept of feature engineering, where anything and everything you do should basically align to a feature – either something the customer touches or uses, or something of engineering importance, like torque management or battery management, that is very relevant at the vehicle or platform level. So that’s what a feature became.

Michael Baloh, Control Engineer at Siemens

So, anything and everything you do – all the requirements, models, parameters, and interfaces – should be accounted for in the container we call the feature. These features are becoming more and more complex. Take the start/stop button, for example: you touch your finger to that button and the car starts. That one feature typically touches 20 different ECUs – 20 different computers on your vehicle – that have to talk to each other for the car to start or stop. So how do you make sure that feature is engineered the right way, with all the complexity that goes into those 20 things? And by the way, those 20 things are not just doing start/stop; they’re doing 15 other features as well. If you look at the complexity that is increasing, that is where this whole feature engineering or feature-centric process starts to evolve.
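To make that idea of a feature as a container a bit more concrete, here is a minimal sketch in Python of what such a container might hold – requirements, functions, parameters, and interfaces, plus the ECUs each function is allocated to. All names and values (Feature, Function, the start/stop data, and so on) are hypothetical illustrations added by the editor, not Siemens tooling or anything stated verbatim in the podcast.

```python
from dataclasses import dataclass, field

@dataclass
class Function:
    """One function contributing to a feature, allocated to one or more ECUs."""
    name: str
    allocated_ecus: set[str] = field(default_factory=set)

@dataclass
class Feature:
    """A feature as a container: requirements, functions, parameters, interfaces."""
    name: str
    requirements: list[str] = field(default_factory=list)
    functions: list[Function] = field(default_factory=list)
    parameters: dict[str, float] = field(default_factory=dict)
    interfaces: list[str] = field(default_factory=list)

    def touched_ecus(self) -> set[str]:
        """All ECUs this feature spans across its functions."""
        if not self.functions:
            return set()
        return set().union(*(f.allocated_ecus for f in self.functions))

# A start/stop feature that ends up spread over several ECUs.
start_stop = Feature(
    name="start_stop",
    requirements=["Vehicle shall start within 1 s of button press"],
    functions=[
        Function("read_button", {"BodyController"}),
        Function("authorize_start", {"BodyController", "GatewayECU"}),
        Function("crank_engine", {"PowertrainECU"}),
    ],
    interfaces=["ButtonPressed", "StartAuthorized", "EngineRunning"],
)
print(sorted(start_stop.touched_ecus()))  # ['BodyController', 'GatewayECU', 'PowertrainECU']
```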

Nick: Awesome. So, a customer might be looking for push-button start – what sort of information comes along with that? And how do teams start to work toward that goal of creating a push-button start for a vehicle when there are so many extra ECUs that it touches almost instantaneously?

Piyush: Exactly. So, that’s what I think Tim was saying: how do you capture, in the first place, what the feature should be doing, and start to understand what functions are needed to realize that feature? Then you begin to allocate those functions – logically speaking, who’s going to perform each function, whether it’s an actuator, a sensor of some kind, or some processing unit. Then you allocate the physical parts, and you start to see how distributed that particular feature is going to be. In doing this, you also see who’s going to do what in terms of domains. So, for one feature, some pieces are going to be done by electrical, there is going to be a lot of software involved, and there are mechanical aspects like braking or steering in the case of autonomous systems. These features can be spread across the domains – breaking the feature down into functions, deciding who’s going to do those functions, and what parts we need for those logical components. Then you cascade them down to the respective domains, each with its affiliation to the feature, and so on. You begin to get the full picture from the top down. And from the domain side of it, everybody gets to see the same system intent for the feature they are trying to accomplish. So everybody works toward the same common goal in what that feature is about.
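As a rough illustration of that top-down cascade, the sketch below takes a feature’s functions, allocates each to a logical component and an owning domain, and then groups them into per-domain work packages so each domain sees its slice of the same intent. The feature, function, and component names are invented purely for illustration.

```python
# Hypothetical top-down cascade: feature -> functions -> logical components -> domains.
feature_functions = {
    "auto_park": ["detect_space", "plan_path", "actuate_steering", "actuate_braking"],
}

# Each function is allocated to a logical component and an owning domain.
allocation = {
    "detect_space":     {"logical": "ultrasonic_sensor_array", "domain": "electrical"},
    "plan_path":        {"logical": "parking_controller",      "domain": "software"},
    "actuate_steering": {"logical": "steering_actuator",       "domain": "mechanical"},
    "actuate_braking":  {"logical": "brake_actuator",          "domain": "mechanical"},
}

def work_packages(feature: str) -> dict[str, list[str]]:
    """Group a feature's functions by the domain responsible for them."""
    by_domain: dict[str, list[str]] = {}
    for fn in feature_functions[feature]:
        by_domain.setdefault(allocation[fn]["domain"], []).append(fn)
    return by_domain

print(work_packages("auto_park"))
# {'electrical': ['detect_space'], 'software': ['plan_path'],
#  'mechanical': ['actuate_steering', 'actuate_braking']}
```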

Tim: Let’s explore that a little bit more. Because, branching off of architecture at the system level, we’re now getting into what you correctly introduced as the EE and software architecture. And that’s an important distinction when you start thinking about the different approaches to solving these functional problems. So, Brad, tell us a little bit about how people would approach this decomposition that we’re referring to in an electrical or software architecture.

Brad: To build upon what Piyush said, we take the function or system models that are created based on the feature requirements. We decompose those down into what is needed for the electrical, electronics, and software space. We then allocate those system models to the actual place they’re going to go in the vehicle – an ECU, a zone module, or some other type of controller – and assess the impact on performance: the networks, the software, and possibly the electrical system and wiring. We want to understand how to architect the platform itself – the vehicle platform. Then you release those requirements into each of the design domains, which have to manage them into the physical implementation. This is very important because, as you get deeper into the physical domains, one engineer could make a decision that impacts another domain, especially in this feature-centric type of approach. So traceability becomes very important. You can go back and ask, number one, “Did you meet the overall requirement?” But also, “Did you violate any cross-domain engineering as you get further into the integration of the vehicle?”
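One way to picture the traceability Brad describes is a simple check that every requirement of a feature is still covered by some domain, and that a change to one domain’s implementation flags the other domains sharing the same requirement. The requirement IDs and element names below are hypothetical examples, not a real traceability model.

```python
# Hypothetical traceability links: requirement -> implementing elements per domain.
trace = {
    "REQ-001 start within 1 s": {"software": "start_coordinator_v2", "electrical": "wiring_harness_A"},
    "REQ-002 no start in gear": {"software": "gear_interlock"},
    "REQ-003 button backlight": {},  # not yet implemented anywhere
}

def uncovered_requirements(trace: dict) -> list[str]:
    """Requirements with no implementation in any domain."""
    return [req for req, impls in trace.items() if not impls]

def impacted_domains(trace: dict, changed: str) -> set[str]:
    """Other domains that share a requirement with a changed element."""
    domains = set()
    for impls in trace.values():
        if changed in impls.values():
            domains |= set(impls) - {d for d, e in impls.items() if e == changed}
    return domains

print(uncovered_requirements(trace))                    # ['REQ-003 button backlight']
print(impacted_domains(trace, "start_coordinator_v2"))  # {'electrical'}
```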

Tim: So, it seems architecture is really important here as well, because we moved from the what and the why – breaking down the functional elements that we’re trying to pursue in the product definition. But now you’re thinking, “Okay, how do I achieve that? How many sensors, or what fidelity of sensor? What latency in the network? How many ECUs, and how are they distributed?” Because, as Piyush has pointed out, we could have hundreds of individual elements to achieve that push-button start. So, when you’re talking about EE architecture, you’re really thinking through the multiple approaches that could be possible – whether it’s many ECUs or zone ECUs, and what the latency of the signal is. You have to consider all those elements.

Brad: That’s correct. You hit it right on the head. And especially as we move into electric vehicles, you start to look at more things like power management. When you place certain functions into certain zones or into specific ECUs, that is going to require a certain amount of power. And as we try to do software over-the-air updates, you need to make sure that, in the future, you have that power management capability there for the particular zone. Therefore, as the vehicle starts to evolve, these are all things you need to think about early in the architecture phase.
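A toy version of that early power check might look like the sketch below: each zone gets a power budget, the functions placed in it draw from that budget, and some headroom is reserved for future over-the-air additions. The zones, functions, and wattages are made-up placeholders.

```python
# Hypothetical zonal power budgeting with headroom reserved for future OTA features.
ZONE_BUDGET_W = {"front_left_zone": 120.0, "rear_zone": 80.0}
OTA_HEADROOM = 0.20  # keep 20% of each zone's budget free for future updates

placements = {
    "front_left_zone": {"headlamp_ctrl": 35.0, "park_assist": 40.0, "seat_heating": 30.0},
    "rear_zone": {"tailgate_ctrl": 25.0, "trailer_module": 30.0},
}

def check_power(zone: str) -> str:
    """Compare the power drawn by functions in a zone against its usable budget."""
    used = sum(placements[zone].values())
    limit = ZONE_BUDGET_W[zone] * (1.0 - OTA_HEADROOM)
    status = "OK" if used <= limit else "OVER BUDGET"
    return f"{zone}: {used:.0f} W used of {limit:.0f} W usable -> {status}"

for zone in ZONE_BUDGET_W:
    print(check_power(zone))
# front_left_zone: 105 W used of 96 W usable -> OVER BUDGET
# rear_zone: 55 W used of 64 W usable -> OK
```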

Michael: When you think about the EE architecture, you also have to think about what’s parallel to it, which is the software architecture. The EE architecture and the software architecture are two parallel architectures that coexist: the software architecture has to be compatible with the electronics, and the electrical and electronic architecture has to be compatible with the software. And so there is this problem of deciding whether functionalities are going to be electrical or whether they are going to be software, and you make some kind of assessment. But then you have to start binding the software to the electronics. That becomes a really tricky problem when we place a piece of software onto the electrical architecture, on one node or another, with concerns about latency. We also need to ask whether the computational unit can perform the action in a timely way. Furthermore, even if two functions are tightly connected together and the latency of the network is addressed, there is still the issue of safety: what if there’s a network issue and a signal is delayed? That can happen. So, there are these complicated dynamics between two architectures that are coupled together. And I think that’s a very interesting issue that people don’t think about when they look at these systems.

Tim: Michael, you opened up another topic, delving into the complexities that you’re defining, and that’s the interfaces between these things. It’s not as simple as a geometric mounting point or something that’s kinematic. It’s now dealing with electrical and electronic signals passing across the network. So, the interface definitions become crucial; they basically represent the contract that will enable this push-button start to deliver the result. So, let’s talk a little bit about the interfaces that get defined through this architectural approach.

Michael: When you look at a system, you can start the problem out very simply. You could say, “Okay, maybe in my most basic type of interface, I just have a list of the parts or things.” But then, when you get a little bit more advanced, you say, “Well, I don’t need just a bill of functions; what I need to know is what input and output signals are shared between these different functions.” Then you might get a little bit more advanced and say, “Well, there are also parameters that I can use to program all of these different functions so I can coordinate the way they’re configured.” Then, going a level above that, you say, “Well, that’s not enough, because these devices have behavioral constraints that need to be respected.” So you can have a level of contract, and then a dynamic kind of interface above that. It is a fascinating problem: how far can you take this idea of a contract on an interface? There’s much research in this, but I think there’s more learning ahead. Because when we get that down really well, we’ll be able to break up these problems and analyze how to assemble systems. The end goal of features is to be able to break down a problem and reassemble it. So, I think there’s a lot of future potential there.
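Michael’s layers – a parts or function list, then the shared input/output signals, then configuration parameters, then behavioral constraints – could be sketched roughly as below, together with a simple compatibility check between what one function provides and what another expects. The class, signal, and constraint names are the editor’s hypothetical examples, not an established API.

```python
from dataclasses import dataclass, field

@dataclass
class PortSpec:
    """Layered description of one function's interface."""
    signals_out: dict[str, str] = field(default_factory=dict)   # name -> type/encoding
    signals_in: dict[str, str] = field(default_factory=dict)
    parameters: dict[str, float] = field(default_factory=dict)  # configuration values
    constraints: list[str] = field(default_factory=list)        # behavioral constraints, as text here

def check_compatibility(provider: PortSpec, consumer: PortSpec) -> list[str]:
    """Signals the consumer expects but the provider does not produce, or with mismatched encoding."""
    issues = []
    for name, encoding in consumer.signals_in.items():
        if name not in provider.signals_out:
            issues.append(f"missing signal '{name}'")
        elif provider.signals_out[name] != encoding:
            issues.append(f"encoding mismatch on '{name}': {provider.signals_out[name]} vs {encoding}")
    return issues

button = PortSpec(signals_out={"ButtonPressed": "bool/CAN"})
starter = PortSpec(
    signals_in={"ButtonPressed": "bool/CAN", "BrakeApplied": "bool/CAN"},
    constraints=["must receive ButtonPressed within 300 ms of press"],
)
print(check_compatibility(button, starter))  # ["missing signal 'BrakeApplied'"]
```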

Piyush: Just to add to that, one thing the industry is obviously driving – forcing all the engineers to do – is time-crunching. They want to do these features fast and furious, I guess – no pun intended. Something that used to be done in years, they want to do in months or even weeks. So, how do you define these new features or modifications to features, build the functions and the interactions or interfaces between the functions, and validate that if you connect these five functions this way, it will actually work? How do you do that upfront and speed up that process? Later, as the interfaces become more detailed, there is the question of whether it’s a signal and, if so, what kind of encoding it uses and what kind of parameter it carries when it gets triggered. Can you assess the effect of all these things upfront? It is a shift toward the left of the engineering V, if you will – doing those things upfront so that you can basically expedite your feature feasibility studies.

Michael: Think about this simple example: when you turn the key in your car or press that button to start the car, a cascade of signals starts going through the networks in the car. A signal goes from one ECU to another, and there’s a certain window of time in which the logic is done in that destination ECU. It calculates some basic things such as, “I’m awake, and I have no issues, no faults,” and it sends back a message to another ECU that says, “Okay, that ECU is working, it can function properly. Now, what about the next ECU?” And if you don’t get the timing right, it’s like clockwork that gets jammed. There have been many situations where customers of ours have literally had cars stop on the line. Because all these cars are so unique – every configuration of a car is different – the car comes to the end of the line, somebody tries to start it, and it won’t start. And after weeks of troubleshooting, they realize, “Oh, we misconfigured a feature that was intended for the North American market, or another market of the world where they didn’t have a functional safety requirement in place.” And so the ECU that was expecting to receive a signal from another ECU didn’t get it, and of course the car won’t start. That’s the kind of real challenge. It’s one thing to say, “I’ve got all the signals that I need.” It’s another thing to get the sequencing – the clockwork of all these signals – correct. The idea is to be able to do that up front: not just verify afterwards, but design systems knowing that they can be plugged together and that the clockwork works. I think this is a leading edge of technology.
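That start-up “clockwork” can be mimicked with a very simple sequence check: each expected message must arrive, in order, inside its time window, and anything missing or late is flagged. The message names and time windows below are invented purely to illustrate the idea.

```python
# Hypothetical start-up cascade: each step must arrive, in order, within its time window (ms).
expected_sequence = [
    ("ButtonPressed",    0,   50),
    ("BodyEcuAwake",    10,  150),
    ("GatewayAck",      50,  250),
    ("PowertrainReady", 100, 400),
]

def check_startup(observed: dict[str, float]) -> list[str]:
    """Flag missing, late, or out-of-order messages in an observed start attempt."""
    issues, last_time = [], -1.0
    for msg, earliest, latest in expected_sequence:
        t = observed.get(msg)
        if t is None:
            issues.append(f"{msg}: never received -> car will not start")
            continue
        if not earliest <= t <= latest:
            issues.append(f"{msg}: arrived at {t} ms, expected {earliest}-{latest} ms")
        if t < last_time:
            issues.append(f"{msg}: arrived before the previous step finished")
        last_time = t
    return issues

# A misconfigured variant where the gateway acknowledgement never comes.
print(check_startup({"ButtonPressed": 5, "BodyEcuAwake": 60, "PowertrainReady": 180}))
# ['GatewayAck: never received -> car will not start']
```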

Nick: And then being able to continuously check that throughout the entire development process to make sure that any other change later doesn’t impact the overall function.

Michael: Well, that’s interesting. That is actually the contract that Tim was alluding to. The idea that you could design an interface and apply a contract, and these signals have certain characteristics and behaviors, so that when you put them all together, all the contracts are in agreement; the handshakes are there; it’s like a business that can run.

Piyush: And that’s very important because, typically in auto, it’s not a one-time deal. These functions or features continuously get modified and continuously get enhanced. So imagine the contract between function A and function B is: you will transmit a signal, I will receive it, and the wait time, if you will, is less than 300 milliseconds. Then, two years down the line, that function changes or something is added to it, and instead of delivering that signal in 300 milliseconds, it gets delayed. Let’s say the processing needed for function A is now 400 milliseconds, while function B is still expecting that signal within 300 milliseconds. Now there’s a disconnect – unless you have these kinds of contracts and the system highlights that the contracts are being violated, because function A is now delaying the signal by 100 milliseconds. Those kinds of things routinely happen, and with tools like the ones we have, these contracts are in place so that such issues can be highlighted and caught in time, upfront, so that problems like these don’t happen.
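Piyush’s 300-millisecond example could be automated with a check along these lines: when a function’s declared worst-case response time changes, every contract that depends on that function is re-evaluated, and the violation surfaces immediately rather than years later. The function names, signal names, and deadlines are hypothetical.

```python
# Hypothetical timing contracts: the consumer expects the provider's signal within a deadline (ms).
contracts = [
    {"provider": "function_A", "consumer": "function_B", "signal": "StartAuthorized", "deadline_ms": 300},
    {"provider": "function_A", "consumer": "function_C", "signal": "DiagStatus",      "deadline_ms": 1000},
]

# Declared worst-case response time per provider.
declared_wcrt_ms = {"function_A": 250}

def violations(wcrt: dict[str, int]) -> list[str]:
    """List every contract whose deadline is exceeded by the provider's declared response time."""
    return [
        f"{c['provider']} -> {c['consumer']}: {c['signal']} now {wcrt[c['provider']]} ms, "
        f"contract allows {c['deadline_ms']} ms"
        for c in contracts
        if wcrt.get(c["provider"], 0) > c["deadline_ms"]
    ]

print(violations(declared_wcrt_ms))   # [] - everything within contract
declared_wcrt_ms["function_A"] = 400  # two years later, function_A gets slower
print(violations(declared_wcrt_ms))
# ['function_A -> function_B: StartAuthorized now 400 ms, contract allows 300 ms']
```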

Michael: If you’re a company that sells cars today and you want to be more dynamic – you want to be more like a software company and agile – then over-the-air updates are a pathway to revenue. How do you confidently update somebody’s software? And if you do that and break their car, what do you lose in the customer’s confidence? Think about that. On one hand, you’ve got this huge carrot, which is the desire to sell a car that can be updated over time. There’s revenue potential in upgrading your car, adding new features and functionality. It’s a very exciting idea; customers could buy a car just for that reason. But if, later on, they buy something from you and upgrade, and then they can’t start their car, or strange things happen, or it becomes unreliable, that could really come back to bite you. So, you need mechanisms in place – which brings us back to the original question: what is feature engineering? Feature engineering is about breaking up that problem and then creating a set of contracts and business mechanisms so that you can do these over-the-air updates. And you can do many other things as well; that’s just one example.

Tim: So, coming back to connected engineering, what you guys are talking about is that our traditional approaches – departmental views of domain disciplines – no longer apply. You could never design or engineer what you’re describing in a traditional way. I think what we’re all reinforcing is that models do matter, and that we need that upstream product definition articulating the what and the why, because it tells me, behaviorally and from a requirements standpoint, the outcome we’re trying to achieve. But then we go further, because in order to connect the engineering teams together, we need architecture at the EE and software level that starts defining how those things will be engineered – not just the function allocation but also the interfaces that will be necessary in a digital world. And then you follow that with continuous verification – a continuous cycle to make sure that each team stays on point toward that end result. So this whole path of connected engineering dramatically reinforces that, in software-driven, autonomous types of products, you have to stay connected through a model-based approach. Otherwise, there is no way to verify that you’re still on path.

Brad: I agree with you, Tim. And that’s why the industry is already making dramatic shifts to virtual hardware-in-the-loop and other ways of doing verification and integration much earlier in the process. The software matures through different levels of fidelity throughout the process. There is also the complication that you may be adding third-party software that needs to be integrated into your vehicle platform. So, there are many different aspects that make this complex. The tools need to be there, linked and integrated, to help the engineering team out. Like you said, the feature is now the central piece that drives the whole process – it breaks the silos down. So, a feature engineer, or the owner of a feature, will have all these domain engineers associated with them, so they can make sure they are not violating those cross-domain aspects, and they can start to integrate and validate into the vehicle much more quickly.

Tim: So, we have only been talking about one feature. But in a current vehicle, how many features are really represented? It’s hundreds, right?

Brad: Oh, yeah. And that’s also why we could go on about the explosion in the signals in a car today – magnitudes greater.

Piyush: Normally, there are hundreds of features, Tim. But if you think about any given OEM, obviously they’re not making just that one car; they’re making multiple vehicle programs. So, from the OEM’s perspective, it’s not hundreds of features, it’s thousands of features across their platforms. And when you add a feature, or a modification to a feature, what you really have to ensure is that the feature is going to work in the context of all these other features as well. Second is the systematic reuse of the functions that are needed for this new feature – how do you do that? When you have thousands of features to worry about, how do you make sure that you’re not reinventing the wheel – that you’re reusing the functions that are already available in your feature architecture?

Tim: Well, auto park may be a good example of that, where today, when you see the commercials, almost every vehicle has some type of auto park, but it’s really combining a series of existing features that have been used for safety as well.

Piyush: Exactly. That whole reuse aspect. It’s a big thing because, obviously, they want to reduce engineering cost overall. So, how do you not do new functions, but use existing functions and innovate new features using those existing functions? That’s a big push. But unless you have some sort of a feature architecture, you just physically cannot possibly do that.

Brad: As we know, the car companies, as you said, like reuse because it reduces cost. You may have multiple devices that could support a feature or a set of functions. So you can go in and look at which one is best suited – which one do I have to modify the least to support it, and continue or increase the volume of that device? There are other ways you can use this feature engineering to minimize the actual component costs of the overall vehicle.

Piyush: That’s a good point, Brad. That’s another significant trend in the industry today, what I would typically call hardware consolidation: instead of having 70 ECUs on a vehicle, they want to go down to four and call them Domain Control Units – four different computers. So all your 70 ECUs are now consolidated into four. But now the problem is twofold. One is: are those four DCUs, hardware-wise, capable of doing the same or more features? That’s one aspect of it. And the second is that all the complexity you had in the hardware is now shifted to software. So the software has to do everything that those 70 ECUs were doing, and be able to map to these four Domain Control Units. That’s another big challenge that OEMs are having: how do you go from point A to point B, from 70 ECUs to four?
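A back-of-the-envelope version of that consolidation question – can four domain control units host what seventy ECUs used to do – might look like the sketch below, which sums the processing load of the software functions mapped to each DCU. The DCU names, function names, and load figures are placeholders for illustration only.

```python
# Hypothetical consolidation check: software functions from legacy ECUs mapped onto four DCUs.
dcu_capacity_pct = {"DCU_front": 100, "DCU_rear": 100, "DCU_central": 100, "DCU_powertrain": 100}

# Each migrated function brings the CPU load it used to place on its legacy ECU.
mapping = {
    "DCU_front":      {"headlamp_ctrl": 20, "wiper_ctrl": 15, "park_assist": 45},
    "DCU_rear":       {"tailgate_ctrl": 10, "trailer_module": 25},
    "DCU_central":    {"body_gateway": 40, "infotainment_bridge": 50, "diagnostics": 30},
    "DCU_powertrain": {"torque_mgmt": 55, "battery_mgmt": 35},
}

for dcu, functions in mapping.items():
    load = sum(functions.values())
    verdict = "fits" if load <= dcu_capacity_pct[dcu] else "needs re-partitioning"
    print(f"{dcu}: {load}% load -> {verdict}")
# DCU_central comes out at 120% and would need re-partitioning or a bigger processor.
```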

Tim: And then we add a supply chain that is a very important part of each of these product companies as well, where the supply chain needs to deliver back the physical parts of it, along with software deliveries.

Brad: Yes, that’s a very good point, because many Tier 1 suppliers are receiving RFIs for zone modules or zone controllers. And they’ve come to us saying that the biggest thing they want to do is architect their product to support many OEMs – not just one. So, they may overbuild the component that’s going to go into the vehicle so it can support many OEMs and volume increases. They’re looking at it almost the opposite way: they come in with the same problems, and they need to architect their solution to target many OEMs, not just one the way it used to be in the old days.

Piyush: Going down to the system-on-chip level – before you design your system on chip, or the hardware itself, you need to validate, for example, the vehicle-level features that are going to be implemented on that chip before you fabricate it. Once you start fabricating, there are a lot of problems, as we’re seeing right now with this chip shortage. But what others are doing is building the capability to design these features and pre-validate them against how those chips are going to behave, before the chips get designed and fabricated. Then they know the entire supply chain of how that feature is going to be implemented, down to the transistors that are going to be on that chip.

Nick: Or even understanding that the engineering work that goes into those chips is worthwhile in the first place – that your idea is going to work.

Piyush: Yeah, validating vehicle-level scenarios on a chip that doesn’t exist.

Nick: What a world we live in.

Michael: Well, there’s a need for this. And the need is lower cost and lower power. If you pull out your smartphone, there’s a lot of custom silicon in it that allows it to reach those 15-hour lifespans on one battery charge while doing very specific tasks. And that’s where it’s actually funny – I said electrical or electronic architecture and software architecture, but in reality, it’s blurry, because now we don’t have to make a binary decision: “Is it software? Is it electronics?” We can actually go a little bit in the middle, with FPGAs and programmable units, where we can say some parts of the functionality are going to be programmable hardware. It gets even more interesting, and more complex, but that’s a topic for another day.

Nick: In fact, that’s going to be our next episode. It’s going to be a good one to listen to if you’re interested in systems to silicon.

Piyush: The key thing that Michael touched upon briefly was safety, and obviously some portion of security. What we’re trying to do is push that safety down to the hardware level – the deeper dungeons of the implementation, if you will – so that nobody can get to it from a security standpoint; nobody would be able to hack it. Bring that redundancy down to the hardware level and build in that security and safety, so that when you’re implementing applications, you’re basically just executing a certain level of logic. But from a safety and execution perspective, it goes down to the hardware level, and nobody can touch that part of it.

Nick: Okay, guys. Thank you so much. We’ve gone through so much information in an episode. Any last thoughts before we sign off?

Tim: Well, let’s spend another five hours talking about this. I think, clearly, we need to talk about this all day.

Nick: I’m sure our listeners will love it. Well, thank you, guys.

Piyush: Thanks a lot, Nick, for giving us the opportunity to record this.

Tim: Yeah. Thanks, Nick.

Nick: Thanks for joining.


Siemens Digital Industries Software is driving transformation to enable a digital enterprise where engineering, manufacturing and electronics design meet tomorrow.

Xcelerator, the comprehensive and integrated portfolio of software and services from Siemens Digital Industries Software, helps companies of all sizes create and leverage a comprehensive digital twin that provides organizations with new insights, opportunities and levels of automation to drive innovation.

For more information on Siemens Digital Industries Software products and services, visit siemens.com/software or follow us on LinkedIn, Twitter, Facebook and Instagram. Siemens Digital Industries Software – Where today meets tomorrow.



This article first appeared on the Siemens Digital Industries Software blog at https://blogs.sw.siemens.com/thought-leadership/2021/10/27/the-application-of-model-based-systems-engineering-ep-3-transcript/