Advanced robotics and AI in industrial manufacturing - Episode 2 transcript

Chris Pennington: Max, I wonder if I could just ask you to provide some examples of practical use cases where you’ve had robotics projects that have improved production efficiency?
Maximilian Metzner: Sure. We always try to focus on scalable processes to develop automation solutions. These processes are not one-offs that you build a concept for, do the engineering and everything, and then have one use case, but something that you can do basically at scale with little adaptation. One of those use cases for us is clearly screw driving.
If you look at our products, most of them are held together by screws, sometimes dozens, sometimes more. So, screw driving is something that really eats up a lot of operating time on our shop floor, but it’s also a process that usually doesn’t require that much flexibility, because it is very standardized and the operator is also basically just manipulating a tool. This is somewhere we very much made use of the cobot capabilities that are now out there in the market, and this was also one of the entry doors we had into the automation journey.
We have, for example, developed a standard screwdriving station that is just as wide as a manual operator’s screwdriving station. It’s basically swappable, and it’s also a power- and force-limited cobot application. You can actually deploy it without the need for fencing or laser scanners or anything like that. The automated process can just run next to the operator without them interfering with or stopping each other, which allows for a nice flow in your production system, while still allowing you to automate some of the work that is being done to the product.
Another classical example in electronics production would be PCB handling. You have the printed circuit board, or the printed circuit board assembly, and you have to do certain things to it. You have to place parts on it. You have to solder them. You have to electrically test them. Then you have to optically test them. You apply a conformal coating, you cure that coating, then you maybe also do an inspection on the coating, and then you depanel the PCBA. This is a standard process flow that’s the same anywhere you go in the world in electronics manufacturing.
Here [at Erlangen], most of the process steps are classically automated. Very few people do the soldering by hand. Today they have a soldering machine. They also have a coating machine. They have an electrical test machine. But what they often do is have a person carrying the PCB from one machine to the other, because most of the time it’s not a simple flow that you would just use conveyors for; you have to differentiate between certain machines, change fixtures, and things like that. This is also something that’s a prime candidate for automation.
The handling between the machines requires some flexibility in the motion, but it doesn’t really require flexibility in knowing what you’re doing, because the process sequence is usually the same every time. This is also somewhere we had huge leverage in improving efficiency, by going from semi-automated PCB manufacturing to, in some cases, fully automated PCB manufacturing, where no human is required in those stages to assemble and produce the product.
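The fixed process sequence Max describes is what makes inter-machine PCB handling a good automation target: the motion needs flexibility, but the routing never changes. A minimal Python sketch of that idea, with hypothetical station names (not taken from any real manufacturing execution system):

```python
# The standard PCB process flow described above, modeled as a fixed
# sequence of stations. Station names are illustrative placeholders.
PCB_PROCESS_FLOW = [
    "place_components",
    "solder",
    "electrical_test",
    "optical_inspection",
    "apply_conformal_coating",
    "cure_coating",
    "coating_inspection",
    "depanel",
]

def route(board_id):
    """Yield (board, station) handling steps in fixed order.

    Because the sequence is identical for every board, the handling
    robot needs no decision-making -- only flexible motion between
    machines -- which is why this step automates so well.
    """
    for station in PCB_PROCESS_FLOW:
        yield board_id, station

steps = list(route("PCBA-001"))
```

The point of the sketch is that the "intelligence" lives in a static table, not in the robot: swapping products means swapping the table, not reprogramming decisions.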
Chris Pennington: I’ve often seen Erlangen held up as an example of a modern, efficient Siemens flagship factory, and it would seem that the adoption of industrial software has helped in this. I want to look into the future and discuss what technology you hope to see mature in the next decade to better help companies like Erlangen improve things for their customers.
Rahul Garg: Yeah. I think from a technology perspective, there are various capabilities Max touched on in the beginning. The ability for robots to sense the different environmental things they’re interacting with is going to improve more and more. The ability to get the right vision capabilities, and the ability to touch and feel different things: I see those as a big, big growth area, because the sensing part eventually leads to the next part, which is deciding the action and doing the analytics on what I should do once I sense something.
And then, once I decide on what I need to do, the action gets taken. I feel that in a typical robotic environment, the actions themselves are well understood, and I think that part has matured quite well. Where technology probably needs to mature more is in the decision-making part and in the sensing part. As those become stronger, we will see much more rapid adoption of robots in many different parts of manufacturing.
But having said that, Max spoke about the hundreds of robots that are now in production. I’m guessing they are probably using robots from many different manufacturers as well. So, how do you deal with that, and how do you align your manufacturing systems when you have to work with possibly different providers and different technology platforms?
Maximilian Metzner: Historically, we started with mainly one robot brand. As I said, we came from manual production, and we had ease of use as one of our most critical purchasing parameters, because we didn’t have the expertise to just use any robot. But over time we’ve built up experience, and we’ve also moved more to the strategy of using the best robot for the use case, rather than just the robot we know.
So, you get into some challenges, like not every maintenance person knowing how to operate every type of robot, and so on and so forth. The same goes for programming. This is somewhere we use a Siemens product, the SIMATIC Robot Library, which allows the robot programming to happen directly on the PLC. This helps in that you don’t have to hunt for an error in a distributed system: was it the PLC? Was it the robot controller? Was it some peripheral that maybe did something wrong? Or is it waiting for a signal, and that’s why the machine stopped? Instead, you have it all in one place: your TIA Portal.
It also allows you to basically switch robots between vendor A and vendor B; at least for the programmer, and also for the person that does the maintenance or wants to reteach a point, it is basically identical. You can then go and choose the right robot, not just the one you’re familiar with programming.
Rahul Garg: That’s a good one, and I would imagine that gives you a lot more flexibility in making sure that you’re working with the right solution versus forcing one into your environment.
I wanted to ask you one more question. Max, you spoke briefly about using a digital twin in your planning process and even in some of your offline programming. Could you talk a little bit about how you have incorporated some of these technologies around simulation and digital twins in your production facility, and how that has helped you as well?
Maximilian Metzner: We use the digital twin as the single source of truth when we’re doing the automation concept. For us, that involves having all the product information, and also all the resource information, meaning the machines and the layout of our shop floor, in one system that basically everybody can access. They can use that information so they don’t work on an old version and end up with a machine planned that doesn’t really fit into the layout anymore.
For this, we actually started, and have now finished, completely digitalizing our shop floor. We now have a 3D layout model of our entire factory that is millimeter-accurate. We can now plan in the digital world without fearing that during implementation things will come up that are just not as they were in the plan, and we can actually trust that the virtual data is really correct.
This involves the machines that we bought from external vendors, and also the infrastructure and everything else you have in your facility today. We use the digital twin basically from start to finish, starting from the design of the robot cell. There we would normally do the basic simulation: getting the layout of your robot cell right, making sure everything is in reach, making sure the motions between the points actually make sense, that the robot doesn’t change configurations all the time, things like that.
Then from cycle time analysis and material flow simulation to estimate the entire system behavior, down to robot offline programming, where you really program the robot in the virtual world and download the finished program onto the real robot once it’s installed. Then on to virtual commissioning, where you also integrate the control logic of the PLC into your simulation, to make sure that, for example, two robots working closely together never execute programs at the same time that would make them crash into each other. You can very clearly check whether your interlocking works right, and whether the robot motions are such that the machine is always in a safe state and doesn’t damage itself.
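The interlock check Max describes for virtual commissioning can be sketched as a simple timing analysis: given each robot's program as timed intervals in named zones, flag any moment where two robots occupy a shared zone at once. Everything below (interval format, zone names, schedules) is a hypothetical illustration, not how the actual PLC simulation works:

```python
# Minimal interlock check: robot programs as (start, end, zone) intervals.
def overlaps(a, b):
    """True if two (start, end) time intervals overlap."""
    return a[0] < b[1] and b[0] < a[1]

def check_interlock(robot_a, robot_b):
    """Return the conflicting interval pairs where both robots would
    be in the same zone at the same time (i.e. a potential crash)."""
    conflicts = []
    for (sa, ea, za) in robot_a:
        for (sb, eb, zb) in robot_b:
            if za == zb and overlaps((sa, ea), (sb, eb)):
                conflicts.append(((sa, ea), (sb, eb), za))
    return conflicts

# Robot A holds the shared zone at t=2..5; robot B enters at t=4 -> conflict.
r1 = [(0, 2, "home_a"), (2, 5, "shared")]
r2 = [(0, 4, "home_b"), (4, 6, "shared")]
bad = check_interlock(r1, r2)

# A corrected schedule where B waits until A has left the shared zone.
r2_fixed = [(0, 5, "home_b"), (5, 7, "shared")]
ok = check_interlock(r1, r2_fixed)
```

In a real virtual commissioning run, the zone occupancy would come out of the simulated PLC logic rather than hand-written schedules, but the check being performed is the same.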
Rahul Garg: I imagine that would be of tremendous value, being able to test and evaluate all of that in a digital world versus the scary and far more dangerous alternative of having robots crash in the real one.
In this whole context of a digital twin, and given some of the examples you’ve just mentioned, do you also incorporate a human digital twin in that process?
Maximilian Metzner: Yes. That happens in, let’s say, classical human workstation design. Here the digital twin can help you get the grasp areas right and place everything in a way that’s ergonomically reachable. It also helps tremendously when you’re talking about automation systems, where you want to make sure that the operator next to the system can do their job and their actions efficiently, without triggering the light fence all the time.
But that doesn’t stop at just simulating a human. What we found really valuable is the ability to integrate the real human, not as a simulation model, but by means of virtual reality. In our simulations, it will show the operator how a station might look that’s not even built yet, and how the robot that’s working next to them will behave. They can experience that beforehand, without any fear of what the next motion might be, whether it is coming close to them, or anything like that.
It also helps our design people quite a lot, because it’s always different looking at something on a screen. Most people doing 3D CAD or 3D simulations will develop a sense of how it might look spatially, but it’s still different looking at it in virtual reality, or later on even in augmented reality, projecting a design onto the real shop floor where it’s supposed to go. This gives you an impression of, say, how tight that aisle somebody has to walk down is. Something like that is really hard to judge just from a screen, and so much easier for non-experts when you just show them and give them a door into the virtual world.
Rahul Garg: This is quite fascinating, where you are actually combining three different scenarios. You have everything in a virtual environment. Then you have some things in a virtual and real environment, and then of course you have the real environment. You really are taking advantage of technology to the fullest to bridge the virtual and the real world.
Just in this context of the virtual and the real world, and bringing in humans and evaluating all of that, I would imagine that as you are looking at different areas to focus on, having that digital factory and that digital representation down to the millimeter level probably gives you the ability to explore other areas for further automation.
Maximilian Metzner: This is actually one of the first stages that we do in any automation project. Usually, somebody has an idea of what could be automated.
Most of the time it’s actually the people on the shop floor and the line managers who say, “Hey, I’ve got this process here. It’s maybe non-ergonomic, or repetitive, and it seems like something you might want to take a look at.” Then you can very quickly go and do this evaluation: if I automate this process, is it feasible? Does it fit there? Can I fit an automation solution in, or do I have to change the entire line? And what does that mean? Because you have a line right now, you’ve done your analysis, you know your bottleneck, you know what the throughput time is and everything, and people calculate with that.
If you now bring in automation, things will change. The automation shouldn’t be the bottleneck, but maybe something shifts: you maybe have a little more throughput time, or something else. Being sure what the results of the automation will be, and how it will affect the systems around it, is something we found is very easy to do in the digital world, because you can run many scenarios in very little time. It’s definitely worth doing beforehand, because you don’t want to integrate something and only then find out what the results are.
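The quick what-if evaluation Max describes comes down to bottleneck analysis: a line's throughput is limited by its slowest station, so automating one station can simply move the bottleneck elsewhere. A toy sketch of that check, with made-up cycle times (seconds per unit) rather than real factory data:

```python
# What-if bottleneck analysis: the slowest station limits the whole line.
def bottleneck(cycle_times):
    """Return (station, cycle_time) of the slowest station in the line."""
    return max(cycle_times.items(), key=lambda kv: kv[1])

# Hypothetical cycle times before and after automating screwdriving.
manual_line = {"screwdriving": 45, "test": 30, "packing": 25}
automated_line = {"screwdriving": 20, "test": 30, "packing": 25}

before = bottleneck(manual_line)     # screwdriving limits the manual line
after = bottleneck(automated_line)   # the bottleneck moves to the test station
```

This is the simplest static version; the material flow simulations mentioned above add buffers, variability, and machine interactions on top of the same idea.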
Chris Pennington: Max, we’ve talked a lot about the actual capabilities you’ve deployed, and I’d like to finish off by looking at the quantified ROI that these projects have delivered. Also, what kind of worries have you seen, if any, when it comes to robotics? Are people feeling that they fear for their jobs, and how is that being handled internally?
Maximilian Metzner: The results we have are basically always based on efficiency. We do automation as a strategic topic, but we don’t just do it no matter the cost. Each one of our automation systems pays for itself, so to say, and we often achieve payback in under two years. Sometimes you have other factors, like workplace design or quality, to factor in as well.
Just to give you a really clear example for one product that we produce: we already had a production line for it, and there were 18 operators doing that job. After automating it, it’s down to three; they’re basically running the machines. This was an investment that paid back in less than two years, which is obviously a great impact economically.
This also helps in some ways with some of the flexibility we need. This is always something people are a little confused about. Many say, “Well, if I do the automation, don’t I lose flexibility in how I work?” I think it’s actually the opposite. You don’t need people to come in at night. You don’t need people to come in on the weekend to get more output. The machine can just run longer, and you need less personnel to support it. That’s actually a bonus.
But concerning what happens to the employees doing the jobs today, whether there was any resistance, and how this was received: I think there is a misperception here. From our side, I would say it’s actually the other way around. It’s not a fear for your job; it’s actually a chance to get a better job. As I mentioned, I think every country in the world is dealing with demographic change. There are just fewer people entering the workforce than leaving it. You will have fewer operators whether you want it or not.
The question is whether those operators do a skilled job, operating automation systems and so on, or whether your output just goes down because you have fewer people doing the job of assembling something, for example. We’ve actually done a very good job of bringing automation and employees together. In fact, our maintenance personnel for automation systems, and the people in the implementation roles at our factory who build and program the systems, are all formerly people from our shop floor. They have been trained, they have been upskilled, and they can now do this higher-skilled work at a different pay grade.
In this way, we can compensate for the general shrinking of the workforce, while still offering the people actively working at our site the chance to improve their jobs and improve their paychecks through automation. Rather than there being a fear of losing jobs, there are actually more jobs to be done, and we will have people to do them in the future.
Chris Pennington: Thanks for joining us today, Max. It’s been great to share your knowledge and experience with our listeners.