Thought Leadership

How AI is optimizing factory maintenance transcript – Part 1

By Spencer Acain

Recently, I had a conversation with Dr. James Loach, head of research for Senseye Predictive Maintenance, in which we discussed the approaches Senseye is taking to bring increased intelligence to the maintenance process and the benefits they deliver. Check out episode 1 here or keep reading for a transcript.

Spencer Acain: Hello and welcome to the AI Spectrum podcast. I’m your host, Spencer Acain. In this series, we explore a wide range of AI topics from all across Siemens and how they’re applied to different technologies. Today I am joined by James Loach, head of research for Siemens and Senseye Predictive Maintenance. Welcome, James.

James Loach: Yeah, hi.

Spencer Acain: So, before we jump into the main topics here, can you give me a little bit about your background and your current work at Siemens and Senseye?

James Loach: Right, yeah, so I have quite a varied background. I began in experimental particle physics, so neutrinos and dark matter, that kind of thing, on the experimental side: developing hardware, doing data analysis, model building, all of that. That work ended up stretching across Europe and North America, and I eventually became a physics professor in China. About six years ago I decided to move outside of academia and return to the UK, and that’s when I joined Senseye and became more focused on data analytics, machine learning, AI, and so forth. That’s actually the majority of the time that Senseye has been around, so I was one of the early employees. During that period, I’ve basically been responsible for the analytics, so for developing all of the algorithms and the conceptual structures that they sit within. I’ve also been part of the product leadership team, because it is one of those products in which the analytics, the user interface, and so forth have to work nicely together, so I’ve been on the product side a little bit as well. So, Senseye from the early days all the way up to the acquisition by Siemens, which is now almost two years ago. And I’m still in that role, getting used to Siemens and taking advantage of the opportunities that Siemens presents us to grow.

Spencer Acain: Yeah, it sounds like you’ve had a really interesting career, from physics professor in China to head of research back in the UK. That’s pretty amazing. But can you tell me more about what Senseye is, how it started, and what role AI and machine learning play in the tool itself?

James Loach: Okay. So, it’s nominally a predictive maintenance tool, though at Senseye, I guess, we define predictive maintenance quite broadly. In effect, what our tool is about is optimizing the usage of your maintenance resources, and in particular the attention of the maintenance engineers. The principle behind it is that it’s monitoring sensor data from machines of all different kinds, and it’s designed to be agnostic to the source of that sensor data, the type of data, and the type of machine. It’s analyzing that data in conjunction with information about previous maintenance, so work events that have been done on these machines. And then it’s acting as a decision support system, where it’s working with the user, using the sensor data and the context to focus their attention on the assets that need it at any point in time. So, it’s a decision support, attention management system, and it’s agnostic to the machine.

Spencer Acain: I see. And that decision support, is that where the AI is coming into this? Because basically, in order to support a decision, it has to make a decision on its end too, doesn’t it?

James Loach: Yeah, absolutely. So the machine learning comes in at various different levels. Upfront, we use a combination of all different kinds of machine learning techniques as well as a lot of traditional statistical techniques, and these operate at different levels. We basically have a bunch of systems which are looking at the sensor data as it comes in and analyzing it in a local context, trying to look for things in that data that might potentially be of interest: unusual spikes, a trend, different things like that. This is marking up the data. And then you have a system which sits on top of that, which looks globally, across all of the different assets that the person is monitoring, and, as I said, tries to focus their attention on the behavior that most merits it. So, in that first set of systems, you have a lot of parallel pipelines using different statistical and machine learning methods. And the system that sits on top of that is something like a recommender system: it’s trying to learn what behavior on assets gets good feedback on alerts, and surface more of that.
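
Senseye’s actual algorithms aren’t public, but the kind of local, per-signal analysis James describes (flagging unusual spikes and estimating trends in incoming sensor data) can be illustrated with a minimal statistical sketch. Every function name and threshold here is an illustrative assumption, not Senseye’s implementation:

```python
import statistics

def detect_spikes(values, window=20, threshold=3.0):
    """Flag points that deviate from the mean of the preceding
    `window` samples by more than `threshold` standard deviations."""
    flags = []
    for i in range(window, len(values)):
        history = values[i - window:i]
        mean = statistics.mean(history)
        stdev = statistics.stdev(history)
        if stdev > 0 and abs(values[i] - mean) > threshold * stdev:
            flags.append(i)
    return flags

def detect_trend(values):
    """Estimate a linear trend (slope per sample) by least squares."""
    n = len(values)
    x_mean = (n - 1) / 2
    y_mean = sum(values) / n
    num = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(values))
    den = sum((x - x_mean) ** 2 for x in range(n))
    return num / den

# A roughly flat signal with one injected spike
signal = [10.0 + 0.1 * (i % 3) for i in range(40)]
signal[30] = 25.0
print(detect_spikes(signal))  # → [30]
```

A rolling z-score and a least-squares slope are about the simplest members of this "parallel pipelines" family; a real system would layer many such detectors, and learned models, over the same stream.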

Spencer Acain: So, it’s learning what counts as a problem and what it needs to pick out, spot, and then present to the people who can do something about it, basically? Not really different from training an employee to look at a gauge or an incoming set of statistics: “If I see numbers above this level, then that’s a problem and I need to do something about it.”

James Loach: Yeah, yeah. And it has to be in this paradigm because of the generality of the system. The system cannot absorb physics and engineering information for all of these different machines that it’s monitoring. Even if it could, and that were the idea, it would not be scalable to tens or hundreds of thousands of different assets; it would be very, very expensive to scale. So, because it lacks that physics and engineering information, this detailed context, it’s got to learn what matters another way, and the idea is to learn from the user. So, yeah, you can think of the AI as doing many things. Like you said, you can think of it as some agent, like a colleague that lacks the common sense of a typical maintenance engineer but has got certain skills: the ability to look at everything at once, to notice very subtle things, and so forth.
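
The "learn from the user" idea can be sketched, very loosely, as feedback-driven ranking of alerts. This is a hypothetical toy, not Senseye’s recommender; the class, the keys, and the learning rate are all illustrative assumptions:

```python
from collections import defaultdict

class AlertRanker:
    """Rank alerts by how useful similar past alerts were to engineers.

    Each (asset type, behavior) pair keeps a running score updated from
    explicit feedback, so behaviors users mark as useful rise to the top.
    """

    def __init__(self):
        self.scores = defaultdict(float)

    def record_feedback(self, asset_type, behavior, useful, rate=0.3):
        key = (asset_type, behavior)
        target = 1.0 if useful else -1.0
        # Exponential moving average toward the latest feedback
        self.scores[key] += rate * (target - self.scores[key])

    def rank(self, alerts):
        """alerts: list of (asset_type, behavior) tuples, best first."""
        return sorted(alerts, key=lambda a: self.scores[a], reverse=True)

ranker = AlertRanker()
ranker.record_feedback("pump", "vibration_spike", useful=True)
ranker.record_feedback("fan", "temp_drift", useful=False)
alerts = [("fan", "temp_drift"), ("pump", "vibration_spike"), ("motor", "noise")]
print(ranker.rank(alerts))  # pump vibration spikes come first
```

The point of the sketch is only the shape of the loop: detectors mark up data, the user gives feedback, and the ranking adapts without any machine-specific physics model.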

Spencer Acain: Right, right.

James Loach: Yeah.

Spencer Acain: Okay. So, we’ve been talking a little bit about predictive maintenance now. How is that different from a normal maintenance system? And is this something you can achieve without machine learning and AI, or is it really a specific application that requires those new technologies?

James Loach: So, I guess predictive maintenance narrowly defined, at least in my head, would be about looking at the data and having models that tell you when these machines are likely to fail, so that you can order the parts and schedule the maintenance at the right time before that happens. It’s all future-focused like that, and precise, and about scheduling of things. Like I said, Senseye really does look at things slightly more broadly than that. We do have functionality that’s designed to predict when machines are likely to fail, but most of the value that people get from the system comes from some more general functionality: the ability of the system to spot odd things, things that have not happened before, that you can’t reasonably predict in advance, and learning from the user, in some informal way, about what things maintenance engineers want to look at each day, what’s useful and what’s not. So it’s broader like that. As for the extent to which machine learning is used, we can say some things about that. Traditional predictive maintenance, to me (and there’s a massive academic literature on this), is all about people making specific models, using some machine learning technique, to predict when some type of machine will fail in some situation. It is full of diverse and, in some sense, interesting machine learning. But like I say, we’re not doing that style of thing. When you’re not building dedicated models, when you’re doing this general-purpose thing, well, I suppose in the first stage of our system a lot could be done with statistical techniques. But machine learning, in many cases, is just a more efficient and flexible way to do things.
And then, when you’re looking at recommender systems and so on, machine learning is, I suppose, a more sophisticated version of things we would call statistical methods; it’s a fuzzy line between the two. Something we might talk about later is language models, and generative AI, and all of that. That is a very nice fit for Senseye as a decision support system, and it is serious machine learning that enables you to do things that cannot conceivably be done with other methods. Language models in particular are a very nice fit for Senseye. We’re a decision support system: we have all these text labels on different things, we have all our observations on the data, and searching through that, figuring out what the user would be interested in, expressing it to them concisely and neatly, and contextualizing it in their local language, that is perfectly suited to language models. So, to answer your question, you could do a Senseye-type app using a lot of statistical techniques without much machine learning, but you have these tools available and they make things better.
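
As a concrete illustration of the language-model fit James mentions, the decision support side largely amounts to assembling the system’s observations into context for a model. The sketch below only builds such a prompt; the asset name, the observations, and the wording are all hypothetical stand-ins, and the actual model call is deliberately left out:

```python
def build_summary_prompt(asset_name, observations, language="en"):
    """Assemble the system's observations on one asset into a prompt
    asking a language model for a concise maintenance summary."""
    lines = "\n".join(f"- {obs}" for obs in observations)
    return (
        f"You are assisting a maintenance engineer. Summarize the "
        f"following observations for asset '{asset_name}' in two or "
        f"three sentences, in language '{language}', highlighting "
        f"anything that merits an inspection:\n{lines}"
    )

prompt = build_summary_prompt(
    "Press 4 main motor",
    ["RMS vibration trending up 12% over 10 days",
     "Two temperature spikes flagged on Tuesday",
     "No similar pattern in the last 6 months"],
)
print(prompt)
```

The heavy lifting (searching, summarizing, translating) happens in the model; the product’s job is to decide which observations are worth putting in front of it and, ultimately, in front of the engineer.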

Spencer Acain: Wouldn’t you also run into an issue, then, if you were to do this entirely with statistical methods? You’d basically have to manually code in every edge case and every scenario, and it wouldn’t be as adaptable to different tools and different applications and deployments. The impression I’m getting from your current tool is that you just set it up and it can run on a huge swath of different machines, or processes, or whatever it needs to monitor.

James Loach: Yeah, I think that’s just correct. You have greater flexibility in those models, and using machine learning to pick up on patterns in the data, you get better models of normal, better models of what people care about, and all of that. Of course it’s central to what we do, and we’re glad it exists, and it makes our life easier. But Senseye is not fundamentally about some magic new deep learning technique we came up with, or some super crazy giant magical model that we built. It’s an extremely pragmatic system that uses machine learning when it’s useful and adds value. I can expand a little bit. I think the reason that we have scaled, and have been successful at addressing a problem that is intrinsically very difficult to scale, is because of very pragmatic, sensible choices about putting the complexity in the right part of the product: understanding where the value really comes from, understanding the role of the user, and having the UX and the analytics work well together. So, I think of Senseye definitely as a product, with a systems approach and things surrounding the actual product that support it, rather than a machine learning thing with a UI attached to it, if that makes sense.

Spencer Acain: Yeah, I think I understand what you’re getting at here: rather than the traditional focused approach, you’ve taken a much broader one, but you’re concentrating your efforts on the areas where it really matters and where it’s going to impact the user and their experience with the tool the most. Rather than trying to build, like you mentioned earlier, those really specialized systems that can look at one machine specifically, and then having to do that for all of your machines, you’ve created a general system that can just be deployed and used. It won’t reach the same level of, “This machine is going to fail in 24 hours, or 22 hours and 18 minutes,” or something like that.

James Loach: Definitely. And I suppose we’ve always thought it’s a massive distraction to go down that direction. In any sensibly run factory, machines are not failing all the time, right?

Spencer Acain: Mm-hmm.

James Loach: You get very few examples of failures. And in our experience, very often these failures don’t look like each other in the data; it may be the same thing, but it shows up in different ways. And you’re dealing with all kinds of different machines with these handfuls of examples. If you focus all your attention on trying to crack that as some deep learning problem, it just doesn’t go anywhere; you really struggle to provide real value to people in a general-purpose setting. So, we consider that to be something we’ve avoided, in favor of things which are just more realistic in the real world and probably provide more day-to-day value to people than a model that maybe has lots of promise attached to it. To me, in monitoring the machines, you just want to make sure things are okay. If there’s something unusual going on, you want to be told about it, to know how often it’s happened before and what it might relate to. Maybe it looks okay and you leave it for a while, and maybe it bubbles up again in a few weeks if it’s still going on. You have this sense that the system is acting as eyes for you. It knows what you care about, it picks things up, it’s monitoring everything, and you trust that when odd or strange things happen, you will know about it, and it will be a good use of your time. And then you can do your inspections and things.

Spencer Acain: Yeah.

James Loach: I feel it’s like a colleague. It’s like a colleague, not a box that sits in the corner and is designed to throw out numbers for when each machine is going to fail, and only does that 1% of the time, and is not that accurate. That’s not very useful, and it’s not the picture we have in mind. But that’s very often what these systems turn into in practice outside of academia.

Spencer Acain: Yeah, an interesting notion to think about. Because like you said, machines aren’t failing all the time, so why would you make a system designed specifically to look for failing machines, when instead you could have a system that makes sure the machines you have are running correctly? It doesn’t necessarily pick out a failure; it picks out when a machine is not running correctly, so it can highlight that for you.

James Loach: Yeah. And we like this quote that things that have never happened before happen all the time. Right?

Spencer Acain: Yeah, absolutely.

James Loach: For that class of thing, right? And maybe we’ll talk about this later, but it is a general-purpose, balance-of-plant system. It wouldn’t be the right system for your jet engine or your nuclear power plant, where it’s worth hiring loads of scientists, incorporating all of the engineering and physics knowledge, doing all the testing, and all of that kind of thing. And, of course, factories often have assets like that, that they want to give that much attention to, and that’s fine. Senseye would give you a certain amount of value on those assets, not as much as a team of data scientists dedicated to that machine would, but the same system you can just connect to everything in your factory. I think that’s just very nice. It means you can focus your data science effort on the handful of machines that really need it, and then you have this thing that you can trust to basically look after everything else. That’s, again-

Spencer Acain: Yeah, makes sense.

James Loach: … How it’s often used and how we think about it.

Spencer Acain: Thank you for that great answer, James. But, I think that’s about all the time we have for this episode. So once again, I have been your host, Spencer Acain, joined by James Loach from Senseye. Tune in next time as we continue our discussion on the ways that Senseye is applying AI to predictive maintenance in industry.


Siemens Digital Industries Software helps organizations of all sizes digitally transform using software, hardware and services from the Siemens Xcelerator business platform. Siemens’ software and the comprehensive digital twin enable companies to optimize their design, engineering and manufacturing processes to turn today’s ideas into the sustainable products of the future. From chips to entire systems, from product to process, across all industries. Siemens Digital Industries Software – Accelerating transformation.


This article first appeared on the Siemens Digital Industries Software blog at https://blogs.sw.siemens.com/thought-leadership/2024/01/11/how-ai-is-optimizing-factory-maintenance-transcript-part-1/