AI for sustainability – Ep. 3 transcript
In the final episode of our AI for Sustainability series, Pina Schlombs and Eryn Devola continue their conversation about how industrial-grade AI can make industries more sustainable. They explore the biggest roadblocks to scaling AI and dive into the role of data quality, supply chain transparency and how digital twins can drive sustainability across industrial operations.
Read the transcript below or listen to the full episode here.
Victoria Carlos: Pina, you talked about how important it is to build that base with data, and companies now have more access to data than ever. That presents both an opportunity and a challenge. So how should they determine data quality? And I know you mentioned use cases, but do you have any specific examples of that?
Pina Schlombs: Yeah, sure. I see a couple of points that companies should be looking at in determining the right quality of data, and I don’t think that we have a lack of data per se in the industry. It’s just that roughly 80% of it goes unused today. We’re gathering tons of data every day but not making use of most of it. So, a couple of points in how I see companies determining data quality. The first one would be defining what data quality dimensions are relevant for your company in your domain. That could be establishing clear standards such as accuracy—we talked about that before—completeness, consistency, validity, timeliness, uniqueness, integrity, reliability, and relevance.
The next step I would encourage companies to do is what is called data profiling—using automated tools to analyze data for patterns, anomalies, and inconsistencies. This really helps in identifying areas that need improvement. Also, use manual reviews step by step to ensure the processes you’re implementing meet the goals or the standards for quality that you set out in the beginning. And then what a lot of companies also need to do is data cleaning. You might have tons of data, but it’s not useful to deploy for AI purposes. Implementing processes that help you identify and then correct or remove incomplete or inaccurate data, and automating that, would be a very good way to start building that base of data that I talked about earlier. Do regular audits.
Really ensure that with your day-to-day operations, the data standard that you might have established at some point doesn’t get diluted down the line. And then last, I would say establish data governance. If you don’t have it already as part of your data strategy, establish a framework that includes policies of how to not only gather data but also how to ensure that you have the right data that you can trust and actually make decisions on—because that’s what we want to be able to do in the end, that we can have confidence in the decisions that we can take with the data that we’re leveraging here. So those would be the main points I see and would encourage companies to take into consideration when looking into the quality of data.
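The profiling and cleaning steps Pina walks through can be sketched in a few lines of code. This is a minimal illustration, not a tool the transcript names: the records, field names, and valid range are invented for the example, and a real pipeline would use a dedicated profiling library.

```python
from datetime import date

# Hypothetical sensor records; field names and values are illustrative only.
records = [
    {"id": 1, "temperature_c": 21.5, "measured_on": date(2024, 5, 1)},
    {"id": 2, "temperature_c": None, "measured_on": date(2024, 5, 1)},   # incomplete
    {"id": 2, "temperature_c": 900.0, "measured_on": date(2024, 5, 2)},  # duplicate id, implausible value
]

def profile(rows, value_field, valid_range):
    """Score three of the quality dimensions mentioned above:
    completeness, validity, and uniqueness (by id)."""
    total = len(rows)
    complete = sum(1 for r in rows if r[value_field] is not None)
    valid = sum(
        1 for r in rows
        if r[value_field] is not None
        and valid_range[0] <= r[value_field] <= valid_range[1]
    )
    unique_ids = len({r["id"] for r in rows})
    return {
        "completeness": complete / total,
        "validity": valid / total,
        "uniqueness": unique_ids / total,
    }

def clean(rows, value_field, valid_range):
    """Data cleaning: drop incomplete or out-of-range rows,
    then deduplicate by id, keeping the first occurrence."""
    seen, cleaned = set(), []
    for r in rows:
        v = r[value_field]
        if v is None or not (valid_range[0] <= v <= valid_range[1]):
            continue
        if r["id"] in seen:
            continue
        seen.add(r["id"])
        cleaned.append(r)
    return cleaned

report = profile(records, "temperature_c", (-40.0, 60.0))
cleaned = clean(records, "temperature_c", (-40.0, 60.0))
```

Running the same `profile` after every cleaning pass is one simple way to implement the "regular audits" Pina recommends, since the scores make quality drift visible over time.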
Victoria Carlos: Speaking of data, supply chain transparency is crucial for creating sustainable industries. How can AI contribute to the flow and exchange of data between companies? And why is it so important?
Eryn Devola: You know, with data, we first need to maybe rethink. We were told for years that data is gold, and some people took this gold and buried it in their backyard—we don’t want anyone else to know where it is, and we don’t want to share it. I think the most critical component of being able to use AI, added onto the quality piece that Pina already talked about, is really about sharing—being much more open with sharing data within our own companies, but also within our supply chains.
Putting these two things together is really a cultural thing within our organizations—to really be prepared to share unless there’s some reason not to. And that’s been a hard one: getting people to dig up that data out of their backyard, dust it off, and be much more willing to share it. We’re seeing that in companies in certain industries. For example, the Together for Sustainability initiative in the chemical industry—they’ve really gotten together and determined how they want to share data with each other in the pre-competitive space but also along their supply chain.
Their suppliers are using the same language and the same types of data to communicate, to share, and to build a more complete picture of the footprint of the products that they have. To me, that’s the foundation—having data that’s in the same language, in the same units, in APIs that speak to each other and recognize one another. Then you can put the analytics and the artificial intelligence right on top of that.
So for me, that’s really a bit of that foundation: having people that understand what they need in your supply chain and in your industry, having some availability and quality within that data, and having people in your network who have similar philosophies and similar goals moving forward. Then you can really use that data to create the digital twin of that supply chain. You can really see what’s going on and model what changes would look like in the supply chain. But for me, the first thing is that cultural change—getting people to be more willing and more open with the data they have and knowing what use cases that data serves.
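The "same language, same units" foundation Eryn describes could take the shape of a shared record format that every supplier fills in, with values normalized to one canonical unit before aggregation. This is a sketch under stated assumptions: the schema fields, supplier names, and figures are invented for illustration and are not from any real data-sharing initiative.

```python
from dataclasses import dataclass

# Conversion factors to the canonical unit, kg CO2e (illustrative subset).
_TO_KG_CO2E = {"g": 0.001, "kg": 1.0, "t": 1000.0}

@dataclass
class FootprintRecord:
    """One supplier's reported emission figure for one product."""
    supplier: str
    product_id: str
    co2e_value: float
    co2e_unit: str  # "g", "kg", or "t"

    def co2e_kg(self) -> float:
        """Normalize the reported figure to kilograms of CO2e."""
        return self.co2e_value * _TO_KG_CO2E[self.co2e_unit]

def total_footprint_kg(records):
    """Once everyone shares one schema and one unit system,
    footprints from different suppliers can simply be summed."""
    return sum(r.co2e_kg() for r in records)

records = [
    FootprintRecord("SupplierA", "part-001", 2.5, "t"),
    FootprintRecord("SupplierB", "part-001", 300.0, "kg"),
]
total = total_footprint_kg(records)  # 2800.0 kg CO2e
```

The point of the example is the design choice, not the arithmetic: agreeing on the record shape and the canonical unit up front is what lets data from different companies "speak to each other," as Eryn puts it, before any analytics or AI is layered on top.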
Victoria Carlos: Yeah, definitely. But talking about the roadblocks and challenges when implementing AI is also a really important factor here. According to a recent Reuters report, 38% of respondents said the most difficult part of AI adoption is projecting and measuring its return on investment. So, what are some potential roadblocks companies could face when scaling AI? And how can they, I guess, see that return sooner?
Eryn Devola: From my perspective, I really see this pulling together a couple of the topics we touched on—first is cultural: getting people to understand that AI is a tool, not a threat. I need to share my data, I need to share my expertise with the AI so that it can help me. That, I think, is the first roadblock to moving adoption forward. We also talked about getting the data to be more integrated—where I can connect pieces of data that relate to one another. But I think you need to do that first to understand the return, especially the time horizon of the return. I think AI can do more than we understand, and without first understanding what we have, I don’t think we can get the full perspective of how it can help us. I would say any return you think you can have—I would double it, because there are things that you’ll get from it that you don’t even see yet.
Pina Schlombs: Yeah, I can absolutely support that. I think the potential that AI brings is really beyond what we can even begin to perceive today. That, of course, makes it very hard to calculate what the actual ROI is going to be—if the benefits are even greater than we can imagine. So, tapping into why we struggle, and what hurdles companies face—we touched upon a couple of them already around data quality, maybe also availability of the right kind of data. But then also integrating AI into legacy systems.
How can we achieve scale with this powerful technology? I mean, even what we do—we integrate and embed AI capabilities into the legacy systems that we have. That makes them available to teams as soon as they’re integrated, but it also democratizes them and makes them intuitively useful in the way that teams and people are already accustomed to working. The key is seamlessly integrating these capabilities into the systems and processes that people work with today—at the right points. I think this is something crucial we haven’t touched on until now: identifying at which points in the processes AI is most useful.
Sometimes we’re so enthusiastic about everything it can do that we start solving problems where it’s not the most useful, and then we’re disappointed we’re not getting that 10x increase in speed. So understanding where AI is the most helpful—like with complexity or where we might need specific expertise that we don’t have available—and then applying it to those use cases. Of course, beyond the technical and organizational change implications, I do see the need for leadership momentum. I think we have a positive example of that—having leadership in companies tap into this powerful new technology, show what it can do, even in their own domains, and encourage forward-thinking.
Supporting this exploratory adoption and getting rid of the fear of being displaced by AI, which I don’t think needs to be a fear today. Getting that hurdle out of the way allows people to explore and experience what AI can do for them.
Victoria Carlos: That’s great, thank you. We’ve talked so much about the benefits and what AI can really offer and bring to sustainability, but it’s impossible to ignore the computing power that AI requires. So how is Siemens addressing the environmental impact of AI, particularly within data centers, and in what ways is Siemens contributing to the solution for this?
Eryn Devola: I think as we know, innovation is really driving a lot of this. We’ve even seen with the rollout of DeepSeek relatively recently that there are different ways you can process data, different ways you can filter that processing. We have a lot of expertise here because we’ve got so many simulation products on the market which require quite a bit of energy to process, and we really look at how we prioritize and how we check that before we invest all the way through in the full processing. A lot of it is about designing the software itself—helping our customers do that and design their AI with energy consumption in mind at the beginning. I really liked what Pina said earlier about when you train your model, especially in the area of industrial AI, giving it the right data and not necessarily all the data.
When you really have a clear use case, you can then give the model the data it needs to make those types of decisions—the expertise in that domain, not every domain. So there’s a bit of focus needed to minimize energy use in the data center. That’s the first thing you want to do: minimize the use. Then once you’ve gotten it down to what you think is the optimal amount, ask how you electrify and run on renewables the cooling and the processing that’s happening within those facilities. We actually have quite a few solutions within our Smart Infrastructure portfolio that can really help people do exactly that.
There are already data centers with AI-powered cooling management, like the GREENERGY data centers in Estonia, that are really showing us this is a possible way to do it. The innovation will help us figure out how to move toward a more lean and sustainable implementation of the AI that’s helping us become more sustainable in how we produce the products we use.
Pina Schlombs: Yeah, I think what is beautiful and builds on what Eryn’s been sharing is that AI really comes in to optimize itself at every level of the AI tech stack. So, whether that’s from the hardware design—where we have our Xpedition PCB design software that optimizes thermal management and resource allocation—or designing the architecture with our portfolio, or the next levels of the applications themselves. That can be, for instance, with our Mendix portfolio.
And the beautiful self-feeding cycle here is that AI can be used to actually improve its own applications.
Victoria Carlos: And finally, I want to move back to success stories and talk about companies who have successfully implemented AI for positive sustainability results. Do either of you have any examples of that, whether that be in our own factories or external companies that use our Siemens portfolio?
Eryn Devola: You know, one of the best things about AI is it allows us to get a custom approach. For example, if you’re trying to fit everything through the same model, you might have to buy the biggest shoe size or the biggest shirt to get everyone to fit in it—and we see that a lot in our industrial solutions. We’ve worked with a company called Automation Innovation, where they’re responsible for cleaning molds for glass production.
What was happening is that they had to design their process for the dirtiest molds, the most extreme cases. That meant a long process with chemicals that carried environmental and safety risks. But instead of saying, “everything needs to take the size 15 shoe,” they were able to customize the approach for each mold coming through—using 2D and 3D imaging data, looking at the digital twin, and deciding based on the condition of that specific mold what the right way was to clean and prepare it for the next round.
Being able to do that gave them the ability to reduce the process from up to 5 hours down to under a minute per mold. I’m not kidding—it’s a huge difference. Because they’re not trying to run every possible step for every mold anymore; they’re customizing. And they were able to eliminate the harsh chemicals while doing it. So, as they implemented this innovative robotic solution with AI, they reduced raw material use by 700,000 tons annually, prevented the emission of 1 billion kilograms of CO₂, and cut onsite energy use by 30%.
This is why we talk about how being more sustainable often also makes you more resilient and more profitable. What they did was take AI, eliminate waste, and in doing so, make strong strides in sustainability at the same time.
Pina Schlombs: A super strong example. Maybe I can share one where a customer leveraged AI in a different domain—closer to engineering. This is a company in the electric and electronics industry: Harting. They trained a design co-pilot on all their historic design data from their PLM system—where all the drawings, CAD models, and simulation models are managed.
They trained the co-pilot so it understood which requirements and specifications determine what kinds of designs are needed. And what they did—very cleverly—is allow this design co-pilot to translate from natural language. So, you or I could say, “I need a connector with these specs; it needs to fit in this space with these limitations,” and within minutes, you could watch it create a brand-new design from scratch—including the drawings, the CAD model, the simulation models—and even optimize for environmental footprint.
Because it understands what goes into the products, how they’re structured, how they need to perform, and how they would perform in the real world. Then, it delivers a validated, simulated design as feedback—within minutes—without anyone needing to manually touch any part of the process or even be an engineering expert. That’s really an astounding example to me.
Victoria Carlos: Thank you. It’s really cool to hear about companies who are implementing AI solutions and seeing both sustainability and profitability results. And thank you again, both of you, for joining me today. I think this has been an awesome discussion. But lastly, before we sign off for the day, are there any last words or bits of advice you’d like to leave potential AI adopters or our listeners with?
Eryn Devola: I would just say—share data. Let’s open up the goldmine that we have in the data that’s already being collected.
Pina Schlombs: I would say embrace the powerful technologies we have, especially AI, in convergence with digital twins, IoT, 5G, and 3D printing, to create the breakthrough innovations we need in our quest to transform industry for sustainability.
Siemens Digital Industries Software helps organizations of all sizes digitally transform using software, hardware and services from the Siemens Xcelerator business platform. Siemens’ software and the comprehensive digital twin enable companies to optimize their design, engineering and manufacturing processes to turn today’s ideas into the sustainable products of the future. From chips to entire systems, from product to process, across all industries. Siemens Digital Industries Software – Accelerating transformation.


