In this eleventh episode of the Model-Based Matters podcast series, our experts continue their discussion of all things model-based systems engineering, focusing on SysML as one component of the methodology, along with verification and mass adoption.
Nick Finberg, Technical Writer for Thought Leadership at Siemens, once again interviews Tim Kinman, Vice President of Trending Solutions and Global Program Lead for Systems Digitalization at Siemens Digital Industries Software, who provides his expertise on the subject. Enjoy reading the podcast transcript.
Check out the transcript (below) or listen to the audio podcast.
Read the transcript
Nick Finberg: Welcome to Model-Based Matters. I’m your host, Nick Finberg, one of the writers for Siemens Software. In this series we cover everything pertaining to model-based systems engineering with our MBSE strategy expert, Tim Kinman.
In the last episode we wrapped up talking about the tools an engineer or system architect might use in their day-to-day work, and it rang very familiar to a paper you authored on updating MBSE for changes in industry. SysML was brought up as one of the components of that methodology. Would you mind talking about what is changing, or why the change was needed in the first place?
Tim Kinman: Yeah, so SysML is a standard. It was derived from the Unified Modeling Language (UML) standard. If we go back to the earlier days of software engineering and the ability to reuse functions from a software definition, UML was a way we could model those software elements, but it didn’t meet all the needs of the broader system. So there were extensions to UML created through additional profiles, and they were represented through SysML. The challenge, though, has been that it is really not comprehensive enough, so a lot of extensions were being made in the field by a lot of tools. And even beyond the tools, customers themselves had to extend SysML, which led it down a path of, in essence, becoming proprietary. So the ability to exchange information was very difficult.
So what happened was the OMG, as a standards group, came forward with an updated definition called SysML v2. Rather than being an outgrowth of UML, they came at it from a fresh perspective and said, “What are the needs of systems engineering, and how do we get not just to a graphical representation, but also to a textual representation that can be parsed, that is machine readable, with textual semantics that guarantee I can exchange across the tool chain without losing any level of detail?” So the value of what we see in SysML v2 is that it can become an interoperability language as well as an authoring language.
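To make the “textual, machine-readable” point concrete, here is a hypothetical sketch in Python. The model fragment uses a simplified SysML v2-style textual notation (not the full grammar), and the toy parser only illustrates that such text can be read by tooling; it is not how a real SysML v2 parser works.

```python
import re

# Illustrative fragment in simplified SysML v2-style textual notation.
# The element names are made up; only "part def" declarations matter here.
MODEL = """
part def Vehicle {
    part engine : Engine;
    part brakes : BrakeSystem;
}
part def Engine;
part def BrakeSystem;
"""

def parse_part_defs(text):
    """Extract the names of all 'part def' declarations from the model text."""
    return re.findall(r"part def (\w+)", text)

print(parse_part_defs(MODEL))  # ['Vehicle', 'Engine', 'BrakeSystem']
```

Because the notation is plain text with defined semantics, any tool in the chain can extract the same elements, which is the interoperability property Tim describes.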
And so, from that standpoint, we’re eager and optimistic about the value for our customers, because we’re now able to bring in other users in the overall value chain who are not primarily systems engineers. They will be able to participate because the information being authored is machine readable. It is a standard that can be exchanged across a variety of tools and across various personas depending on their role. It also enables a broader value chain and partner community that wants to be engaged in the upstream, shift-left system definition as well as the downstream engineering.
Nick Finberg: So, you hit a point on automation and machine readability. From my understanding of some of the other hurdles companies face today, that will be a game changer when it comes to sustainability solutions. There is so much data to collect, analyze, and act upon, and automating that collection and parsing will be critical to getting more than data scientists onto the solution path. Are you seeing that as a priority with customers?
Tim Kinman: Well, I’d say everything is moving to be more and more data driven, because we are now at a point in time where products are so sensor based, and the pace of sensor-based products is growing almost exponentially. I’ve read some data quoted recently from analysts about the growth of sensor-enabled products, which means all this data is flowing back. Being data driven means it’s no longer a matter of waiting for feedback and then applying it to my next product cycle. The expectation is going to be much, much faster: I need to be data driven not just in the way I’m listening to or interpreting the performance of my product in the field, but also in the way I’m bringing that back into my engineering cycle. In sustainability, it could be that we identify that we’re not achieving a certain target around disposal, reuse, or emissions, and we need to take action on that faster.
And so, what I want to be able to do is use that same digital twin of the product, where I am getting my performance feedback, and bring that data back into my engineering process to optimize the next product cycle. That could happen fairly quickly once the data is being driven back into your thinking. The fix might be as simple as a software update, which is the more likely case: think about a software update that could be rapidly pushed over the air back to that product. That would be one way to think about it.
If it instead entails a physical engineering change, that may have a different timeline, but it still allows us to be data driven in the way we think about addressing the need of that market or that particular customer response. So I think you’re picking up on one of the general trends: data driven is where it’s at. And that’s going to have an impact not just on how you measure your product, but on how you bring that voice of the customer, or voice of the product, back into your engineering cycle.
Nick Finberg: All right. So, I came into that question thinking just about the design aspect, but you went ahead and extended it fully into refining your product even after the fact. The data-first approach really makes sense for large businesses trying to work cooperatively within their supply chains, and even within large businesses themselves. But how do you ensure the right information aligns with what is happening? Is there a process for that within the MBSE method?
Tim Kinman: It’s an extension of the digital twin. We already have a digital twin model of the released product. And that released product is not just the product standing still; it’s not just 3D, it has to be 4D. We have to have time in there, and when we think about the dimensional elements of the product, the time element may include the behavior, like a state machine or a discrete-event type of capability. The important part is that the product insight that comes from the digital twin of the released product is going to be inclusive of not just the physical elements of the product, but the system elements as well, so that you can assess through operations whether the product is meeting your targets. And so we’ll use some of these same MBSE models, system models, in the operational aspects of the product through the digital twin.
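The “state machine or discrete-event” behavior Tim mentions can be sketched minimally. The states, events, and transitions below are invented for illustration only; they are not taken from any real product model.

```python
# A minimal sketch of the "4D" idea: the digital twin carries behavior over
# time as a state machine, not just geometry. All states and events here are
# hypothetical, chosen only to illustrate replaying field telemetry.
TRANSITIONS = {
    ("idle", "start"): "running",
    ("running", "fault_detected"): "degraded",
    ("degraded", "software_update"): "running",
    ("running", "stop"): "idle",
}

def run(events, state="idle"):
    """Replay a sequence of field events against the behavioral model."""
    history = [state]
    for event in events:
        state = TRANSITIONS.get((state, event), state)  # ignore invalid events
        history.append(state)
    return history

# Replaying telemetry against the twin shows when the product degraded
# and when a software update restored it.
print(run(["start", "fault_detected", "software_update", "stop"]))
# ['idle', 'running', 'degraded', 'running', 'idle']
```

Assessing operational data against such a behavioral model is one way the system elements of the twin, not just the 3D geometry, tell you whether the product is meeting its targets.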
And we can also use that same model for dialogue with engineering. Using that same digital twin model and its system representation, we can hold a dialogue with engineering if there are follow-ups, and we can use it as a communication channel back to engineering for the next generation or for general product improvement. So, to shorten a long answer: the digital twin is still critical, maybe even more so, in that we need a virtual representation of the physical product, but it must expand to also represent the system elements that are driving the software contained on that product.
Nick Finberg: Okay, so you started the discussion with an emphasis on software, and we’ve come back to it again and again. That’s a big driver in the complexity of some of these products. But how does software play into the continuous verification of something like a car or an industrial machine? You mentioned updating the software earlier in our discussion of sustainability, but what does that look like in other areas?
Tim Kinman: Well, the idea of being continuous means that I want to be able to verify as early as possible in my product thought process. It could start in the concept phase: I take the digital twin of my prior product and identify points of innovation related to new requirements. From there, I want to start verifying as early as possible how those points of innovation may impact other aspects of my product. And it goes back to our earlier points about balancing requirements that may span functional and non-functional, all of that, including sustainability targets. So I need a verification capability early, so that I can evaluate those decisions before I commit to the engineering of those product outcomes.
Now, as I drive that through the engineering cycle, I’m continually verifying that I’m still on a path to meet those targets. Even though it’s the same requirement in the concept phase and throughout the whole product cycle, my fidelity of verification will get finer grained as I move through the development cycle, so I can ensure that I’m actually meeting the requirement, and the set of requirements, that have been stated.
Nick Finberg: So maybe a very limited, almost simplistic example would be: we want large wheels on this vehicle for better comfort. The very abstract idea of bigger wheels will then get refined to, okay, we need them to be this size for the different terrain environments that the vehicle is going to see. Is that kind of where you’re going with that? Granted, it’s going to be far more complex with software and everything else installed too.
Tim Kinman: Well, it may be a little more complex than the wheels. Think more about sensors; let’s go back to sensor based. It could be that I have a particular definition of sensors, braking distance, and comfort, and I’m trying to balance those together. Maybe I’m going to swap in a higher-fidelity sensor that has a longer range, so I can see things farther out. I still want to evaluate the braking behavior and the comfort that comes with it. Maybe a finer-grained sensor allows me to brake sooner. Maybe that’s a good thing: it allows me to see the target and brake sooner, and therefore improves the comfort of the occupant. So now I’m impacting multiple requirements, and I want to evaluate the cause and effect of that. Conversely, I may have decided that the sensor I’m using is really expensive and I want to use a cheaper version.
But the problem may be that the cheaper sensor has a shorter range, which then causes a jerking effect on the occupant, right? So that may be something I have to understand: if I bring in a lower-cost sensor, am I still going to be able to maintain my braking distance? Am I still going to be able to maintain the comfort of the individual in the vehicle? Those are the kinds of tradeoffs and balances we want to understand early in our decision cycle, before we commit to an engineering action, because downstream engineering change has become very expensive.
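The sensor trade-off Tim describes can be sketched as a toy early-verification check. All numbers, names, and thresholds below are assumptions invented for illustration, using the basic stopping-distance relation (reaction distance plus v²/2a).

```python
# Hypothetical values throughout: an illustrative early trade study checking
# sensor options against braking-distance and comfort requirements before
# committing to an engineering change.
REACTION_TIME_S = 0.2      # assumed system reaction time
SPEED_MS = 27.8            # roughly 100 km/h
MAX_COMFORT_DECEL = 3.0    # m/s^2 above which braking feels jerky (assumed)

def min_decel_needed(sensor_range_m):
    """Deceleration required to stop within the sensor's detection range."""
    braking_distance = sensor_range_m - SPEED_MS * REACTION_TIME_S
    return SPEED_MS**2 / (2 * braking_distance)

# Two hypothetical sensor options: (name, detection range in m, unit cost)
for name, range_m, cost in [("premium", 250, 900), ("budget", 120, 250)]:
    decel = min_decel_needed(range_m)
    comfortable = decel <= MAX_COMFORT_DECEL
    print(f"{name}: needs {decel:.1f} m/s^2, comfortable={comfortable}, cost=${cost}")
```

Under these made-up numbers, the shorter-range sensor forces a harder deceleration than the comfort threshold allows, surfacing the cost-versus-comfort conflict before any engineering commitment is made.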
Nick Finberg: And you can even throw in more axes for balancing; it might be that the higher-cost sensor produces too much data for your SoC to process fast enough. Okay. So, we’ve talked a lot in this episode about how to implement MBSE practices, and even some of the why in our previous episode, but I want to know, in your view, what is needed to reach mass adoption of a methodology like this? Is it different for vertically integrated companies versus ones that rely on a larger supply chain network?
Tim Kinman: Yeah, I think the adoption is going to be multifaceted, right? Number one, it has to be open. Historically, it’s been more of a closed type of environment, a specialist type of role. So one part that we push on, and think is very, very important, is that we are an open company. It’s an open ecosystem, and all of our systems engineering applications need the ability to interact in this digital twin ecosystem. The other element is that it has to be easier to use. Right now it’s considered a set of applications for experts, yet a lot of people who participate in the decisions are not experts. So we need to make sure that the context of the system, what some refer to as the viewpoints of the system, can be served up to people in a way that is not too complex or overwhelming for them.
I think ease of use is another key element of what we’re after. And generally, the other thing customers are dealing with is their organizational transition: pervasive adoption of model-based systems engineering is going to require not just technology, but customers adopting, adapting, and changing some operations inside their own companies. As their own workforce turns over, they’re bringing in more people who know software engineering and more people who know systems engineering, and the traditional ways of working, not so much waterfall as a set of siloed, stacked domains, are breaking down. So some organizational change is going to be required in most companies making this kind of transition as well.
Nick Finberg: All right. Well, thanks so much, Tim. That was all my questions. Is there anything else that you want to leave the listeners with?
Tim Kinman: Well, I don’t know if there’s any one thing to leave with. I think the need to either think about or begin the transition to systems engineering is here. Most customers are living through that, as their products and operations shift responsibility to software right away. Just by saying that, you’re probably on a path of transition to model-based systems engineering. And we should also think about how we turn this into reuse, like we’ve done historically in the mechanical and electrical, the traditional physical space. Software and systems are intellectual property for all these customers, and turning that into reuse is another element, another reason why the digital twin plays such an important role. It has to be represented and be part of your product intellectual property, because that is going to be as much of a differentiating value for you as your historical physical elements have been.
Nick Finberg: Okay. Well, thank you for that and I hope we get to talk soon.
Tim Kinman: OK, great Nick, thank you.
Nick Finberg: Thank you, Tim, and thank you to our listeners. I hope this was a valuable discussion. Make sure to check out some of our previous episodes, where we talk to experts from a variety of industries and roles, and subscribe to Model-Based Matters for future conversations around model-based systems engineering. Have a good one!
Siemens Digital Industries Software is driving transformation to enable a digital enterprise where engineering, manufacturing and electronics design meet tomorrow.
Siemens Xcelerator, the comprehensive and integrated portfolio of software and services from Siemens Digital Industries Software, helps companies of all sizes create and leverage a comprehensive digital twin that provides organizations with new insights, opportunities and levels of automation to drive innovation.
For more information on Siemens Digital Industries Software products and services, visit siemens.com/software or follow us on LinkedIn, Twitter, Facebook and Instagram. Siemens Digital Industries Software – Where today meets tomorrow.