Thought Leadership

How AI is optimizing factory maintenance transcript – Part 2

In a recent podcast, I spoke with Dr. James Loach, head of research for Senseye Predictive Maintenance, about their applications of AI, and the importance of AI decision support systems. Check out a full transcript of that conversation below, or listen along here.

Spencer Acain: Hello and welcome to the AI Spectrum Podcast. I’m your host, Spencer Acain. In this series, we explore a wide range of AI topics from all across Siemens and how they’re applied to different technologies. Today I am once again joined by James Loach, Head of Research for Senseye Predictive Maintenance. In the last part of this series, we talked about the ways that Senseye is applying artificial intelligence to the problem of predictive maintenance, along with some definitions of what exactly predictive maintenance is and the ways you can apply AI to it. But now I’d like to change tack a little bit and look at something else you mentioned in the previous part, which is the idea of AI as a decision support system, a kind of recommender system. Can you give us some more details on that and what that means in the context of predictive maintenance?

James Loach: Yeah. We think of this in terms of context. Senseye operates in a low-context environment. The idea is that people have some random sensors on their machines and they can connect up those data streams. They see the data in the app, and the application should do something useful and sensible out of the box, then interact with the user, learn what’s going on and what matters, and do better over time. We call that a low-context environment because there’s no information about these machines. Maybe they would tell the app what the machine is, right? It’s a conveyor or something. But there’s no detailed physics or engineering information. Now, you can imagine building systems, and people do, that require you to put all of this in, but that’s a different kind of system. That’s something that’s very heavy to set up upfront, requiring effort and specialists and so forth. It’s slow, expensive and heavy. And if you’re trying to monitor your 1,000 random machines of all different types in your plant, you’re not going to do this.

And so we’re at the other end. We say, “Okay, let’s not do that, right?” We don’t make all these demands of the user for all of this complex expertise, but then you have to provide value without it. How do you do that? It has to be a decision support system. It’s a system that has some kind of built-in priors. It knows, in very general terms, the kinds of things that maintenance engineers care about, trends for example, some basic notions like that, and then it has algorithms which start looking for these things in a flexible way and learn from the user what matters to them and what doesn’t. And it’s a balance, because if the user wants to provide us with specific information, we do try to make it easy for them to do that. They can set rules and thresholds and all kinds of specific stuff if they want, and configure the algorithms in detail. We just don’t demand that they do that.
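To make the "built-in priors plus learning from feedback" idea concrete, here is a minimal, hypothetical sketch. Senseye's actual algorithms are not public; the class name, the trend threshold, and the multiplicative feedback weights below are all invented for illustration. The shape of the idea is what matters: a generic prior (a simple trend check) produces candidate alerts, and per-signal weights learned from thumbs-up/thumbs-down feedback adjust how loudly each one is raised.

```python
# Hypothetical sketch of a low-context decision support loop: a generic
# built-in prior (a least-squares trend check) plus per-signal weights
# adjusted from user feedback. All names and thresholds are illustrative,
# not Senseye's real implementation.
from collections import defaultdict

class TrendAdvisor:
    def __init__(self, slope_threshold=0.1):
        self.slope_threshold = slope_threshold   # built-in prior: "trends matter"
        self.weights = defaultdict(lambda: 1.0)  # learned relevance per signal

    def _slope(self, readings):
        # Least-squares slope of evenly spaced readings.
        n = len(readings)
        mx, my = (n - 1) / 2, sum(readings) / n
        num = sum((x - mx) * (y - my) for x, y in enumerate(readings))
        den = sum((x - mx) ** 2 for x in range(n))
        return num / den if den else 0.0

    def score(self, signal, readings):
        # Alert score = generic trend evidence scaled by learned relevance.
        slope = self._slope(readings)
        return self.weights[signal] * max(0.0, slope - self.slope_threshold)

    def feedback(self, signal, useful):
        # Thumbs up/down nudges how much this signal's alerts matter.
        self.weights[signal] *= 1.25 if useful else 0.8
```

The point of the sketch is that nothing machine-specific is configured up front; the system starts from weak, general assumptions and lets feedback reshape its priorities over time.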

The baseline is to learn implicitly from feedback as it goes along, so decision support. And then, I guess we could go into this now, fleshing out the decision side a little bit more: we have a central metaphor for the product, which is that it’s facilitating the interaction between three characters. One of those characters is the user, typically a maintenance engineer or reliability engineer kind of person. Their specialist knowledge, their superpower if you want to put it that way, is that they know what’s going on on the ground. They’re next to the machine; they know what’s on fire and what’s not. But they maybe don’t always have specialist knowledge about those particular machines. They also don’t have the ability to look at everything at once, obviously, and they’re time-pressured and so forth. But their superpower is this local, practical knowledge.

And then you also have experts. That could be a machine builder, it could be a Senseye delivery expert, it could be somebody else within the company who is an expert on robots, or it could be somebody at Siemens, this kind of thing. That character is somebody with expert knowledge about a particular kind of machine, its physics and engineering, which they might be able to add in. And then you have the AI. The AI in our situation doesn’t have expert knowledge of particular kinds of machines, and it doesn’t have local knowledge, but it does have a global view. It can look at everything at once and evaluate the relative importance of different things. Okay, so decision support system: we try to construct it around that metaphor, that you’ve got the user, optionally you have experts, and you have the AI, the product.

And these characters are working together in what we think of as a constructive conversation, to try and help the user focus their attention and work through problems. And I guess the thing with Senseye is that we’ve had that as a metaphor, the way we’ve built things, but language models do this wonderful thing of letting you make it very concrete. So instead of the AI being something which is in some sense dumb, a semi-mechanical thing that is just looking at scale, you can introduce real intelligence into it and the ability to actually discuss things with the user. And that’s ultimately where our product is going. There’s also the ability to take in knowledge from experts, to take on more of that expert role, and then it becomes a much more equal thing. The user is working with the AI, which is like a colleague who can see everything at once, but in the end stage of this also knows everything about the user, everything about the machines that it can reasonably get access to, everything the user has done, all the work events, all the feedback, all the properties of the data it has examined over time.

And it can bring all of that to bear at any particular moment, suggesting to the user, “Why don’t you have a look at this? Why don’t you have a look at that?”, talking to the user and understanding, again, what they care about. So rather than the user leaving thumbs up, thumbs down feedback or writing something in a box, it’s a conversation. The system can establish what’s going on, and what this means to the user, in a proactive way, and then use that information super efficiently.

And this is why language models and generative AI are so nice for Senseye: we have this metaphor, we have these structures set up, it’s just that we’ve never been able to make those conversations real and smooth and make optimal use of all of this disordered information. But now we can, and this is quite an exciting…

Spencer Acain: Right. I mean, it is an exciting time in the AI space, but it sounds like if you want to build this sort of conversational decision support system, you’d need a pretty high level of trust in the system to be able to just talk to it, rely on it, and have it be the global view, the bridge between all these elements that understands everything at a higher level, like it’s capable of. So how are you building the trust that you’d need to have among users and experts to make this work?

James Loach: Okay, so there are probably two aspects to this: one looking backwards, and one looking forwards into a language model world. Looking backwards, trust in the system is promoted in lots of different ways, but structurally, one decision we’ve made is to focus the complexity of the system in the places where users can understand it most easily. What’s the best way to say this? So you have some data coming in from your machines, and because Senseye is general-purpose, it could be any kind of data. And then you think, well, how are you going to make inferences from this crazy random data, any data? You can imagine building some sort of super-sophisticated central model that’s somehow right, or you could imagine having loads of different models or different systems that apply to different kinds of data, where maybe the user could choose the one they want or configure them differently, all this kind of thing. So you can have this kind of algorithmic complexity.

Another approach is to not do that. You can have algorithms that are conceptually simple, so the user can model and understand what they’re doing. They can kind of know them, in some sense. And you make that work by having data transformation: you convert the initial data into something that works well with less general algorithms, if you know what I mean. I would say the system itself is designed to be quite accommodating of all different kinds of data and to do something useful. But if you want to make the system better in an explicit way, we allow the user to do that, not by tuning a new model, selecting algorithms or doing complex things on the modeling side, but by transforming the data, using a visual language that is quite easy to use.

And our thought is that we’ve designed a system that kind of does its thing. You use Senseye for a while, you learn the kinds of things that it will pick out, the things that it’s good at and the things that it’s less good at. You just kind of know this; you can see it. And then if you want to improve on the things that it’s not optimal at, you modify your data: you do some transformation, or remove off states, or specify a control parameter like the line speed, or specify a regime, saying you’re making part A or part B at different times, and that kind of cleans up the data, or many different things like that. But it’s on the data side. And again, that’s a deliberate structural choice. It means that we focus on a certain way of doing things with algorithms, and we do it well and clearly and efficiently. And that is the thing.
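As a toy illustration of this "fix it on the data side" approach, here is a hypothetical transformation along the lines James describes: dropping off-states and bucketing readings by operating regime before any analytics run. The field names, threshold, and normalization are all invented for the example; Senseye's actual transformation language is visual, not code.

```python
# Hypothetical data-side transformation: remove off-states and split
# readings by operating regime, so that simple downstream algorithms
# see clean, comparable data. Field names and thresholds are invented.
def transform(samples, off_threshold=5.0):
    """Drop off-states and bucket vibration readings by regime.

    samples: list of dicts like
        {"speed": line_speed, "vib": vibration, "part": "A" or "B"}
    Returns {regime: [speed-normalized vibration readings]}.
    """
    regimes = {}
    for s in samples:
        if s["speed"] < off_threshold:   # machine is off: not a fault signal
            continue
        key = s["part"]                  # regime = which part is being made
        # Normalize by line speed (the control parameter) so readings
        # taken at different speeds are comparable within a regime.
        regimes.setdefault(key, []).append(s["vib"] / s["speed"])
    return regimes
```

After a transformation like this, a single conceptually simple algorithm (say, the trend check from earlier) can run per regime without needing machine-specific modeling.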

And then the complexity sits outside of that, in transformations that users can understand. But anyway, the thing is, again, if you think of decision support and you think of the system as like a colleague: human beings trust other human beings that they’re able to model, that they know. “In this situation, that person’s likely to do this; I can trust them to do it.” It’s predictable, and Senseye is predictable in that sense. A mass of different algorithms and algorithmic complexity would interfere with that and make the system feel more out of control, or hard to understand, or obtuse somehow. Whereas it does feel like a reliable colleague in practice, even if it’s not an Einstein.

Spencer Acain: Somebody you can rely on to get the job done, even if they’re not going to be a genius at it.

James Loach: Okay, I don’t know if I should say that about the product, but these are what I mean by practical decisions: the way we’ve done it supports scaling in the ways that I’ve described. It does its thing, it’s trustable, it’s efficient, it does something useful out of the box, it has a decent data transformation language that’s easy to use, the system feels under control, and it does feel like a reliable colleague. And then language models, of course, I don’t need to talk too much about that, are an extra layer of sophistication. The system that I just talked about can run, but then you have a system on top of it which knows how that system is running, can look at the results, can assess the relative importance of things, and can, in a human-like way, maybe veto certain things where it reckons actually the user probably… there’s probably nothing, right?

Or it can raise things to the user. We have working systems that do this, and they’re very nice. They can raise a case, an alert, describe what’s going on and why the analytics produced it, and then say, “But I reckon this is probably because it’s a national holiday and you didn’t tell the system, and the data stopped; that’s what I reckon is going on.” And it does that very well. This is a layer that you can add nicely on top, and it gives it more smoothness, more accuracy. And of course, you get to ask it why: “Why do you say that?” It understands, introspectively, the whys of our analytics, and it can explain the detections and give you advice if you don’t like them. The things that we’ve prototyped do those things as well, and they can have control of the app: you complain about something, and it can go change the setting for you, and you’re good.
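The layer James describes uses language models; the sketch below is a much simpler, rule-based stand-in that shows the same triage shape: before raising an alert, check whether known context (here, an invented holiday calendar) explains the anomaly, and attach a human-readable reason either way. Everything here, the calendar, function names, and messages, is hypothetical.

```python
# Rule-based stand-in for the triage/veto layer described above: check
# whether known context explains an anomaly before raising it, and
# always return an explanation. The calendar and wording are invented.
from datetime import date

HOLIDAYS = {date(2024, 1, 1), date(2024, 12, 25)}   # example calendar

def triage(alert_day, data_stopped):
    """Return (raise_alert, explanation) for a detected anomaly."""
    if data_stopped and alert_day in HOLIDAYS:
        # Veto: the anomaly is probably a planned shutdown, not a fault.
        return (False, "Data stopped on a national holiday; probably a "
                       "planned shutdown, not a fault.")
    if data_stopped:
        return (True, "Data stopped unexpectedly; worth a look.")
    return (True, "Anomalous readings detected.")
```

A language model replaces the hard-coded rules with open-ended reasoning over context the user never entered explicitly, but the contract is the same: a decision plus a reason the user can interrogate.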

Spencer Acain: Well, thank you for those great answers, James. It sounds like AI has a lot of applications in this area, especially when it comes to building a conversational and easy-to-use way of interacting with the tools and information available from these sorts of predictive maintenance platforms. But I think that’s about all the time we have for this episode. So once again, I have been your host, Spencer Acain, joined by James Loach. Tune in next time as we continue our discussion on the ways that Senseye is applying artificial intelligence to the problem of predictive maintenance.


Siemens Digital Industries Software helps organizations of all sizes digitally transform using software, hardware and services from the Siemens Xcelerator business platform. Siemens’ software and the comprehensive digital twin enable companies to optimize their design, engineering and manufacturing processes to turn today’s ideas into the sustainable products of the future. From chips to entire systems, from product to process, across all industries. Siemens Digital Industries Software – Accelerating transformation.

Spencer Acain


This article first appeared on the Siemens Digital Industries Software blog at https://blogs.sw.siemens.com/thought-leadership/2024/02/01/how-ai-is-optimizing-factory-maintenance-transcript-part-2/