The importance of considering ethical challenges and balancing speed vs. trustworthiness during AI implementation for digital industries — Part 1

Artificial Intelligence (AI), in general, and Generative AI (GenAI), in particular, can have a big impact on almost every aspect of the product management lifecycle in Digital Industries. Specific to system design and production, this impact could be felt from the concept phase to detailed design to simulation to manufacturing and beyond. However, for the impact to be positive, we must abide by certain ethical principles in addition to legal ones.
Recently, I was asked by our strategy group to answer certain questions regarding ethical challenges posed by AI specific to digital industries, and I thought the answers might benefit a broader audience; hence this two-part blog, which I hope will inspire more thought and conversation on the topic.
Note that the following answers reflect the state of current AI models, and that my opinions may develop as the industry does. Also, note that this blog was created with assistance from a GenAI tool.
What are some ethical challenges engineers encounter when utilizing AI?
Ethics in Artificial Intelligence (AI) is especially critical in engineering and manufacturing because the stakes are high: we are dealing with safety, compliance, environmental impact and, in some cases, human lives. We know that today’s AI must never be the sole decision-maker for safety-critical applications. There must be a human-in-the-loop process, explainability of AI decisions, and rigorous validation of models. For instance, Large Language Models (LLMs), which belong to the branch of AI called Generative AI (GenAI), are notoriously confident even when they are wrong. They can fabricate standards, citations or even entire parts lists.
In my own group, we had a case where we gave a fine-tuned LLM the problem of creating a new drone from an existing one by changing some of the requirements. The additional parts suggested by the LLM for this new drone seemed logical and had reasonable part names and numbers, but some of them didn’t even exist! Engineers should always require any LLM to cite sources and express uncertainty, and then verify all safety-critical answers through simulations and rigorous testing.
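One simple guardrail is to never let an LLM-suggested part enter a design without checking it against an approved source of truth. Below is a minimal sketch of that idea; the part numbers and the catalog are hypothetical illustrations, not real engineering data.

```python
# Minimal sketch: cross-check LLM-suggested part numbers against an approved
# parts catalog before they enter a design. Catalog entries and part numbers
# below are hypothetical.

APPROVED_CATALOG = {
    "MTR-2200-X": "Brushless motor, 2200 kV",
    "ESC-40A-V2": "Electronic speed controller, 40 A",
    "FRM-450-CF": "Carbon-fiber frame, 450 mm",
}

def verify_suggested_parts(suggested_parts):
    """Split LLM-suggested part numbers into verified and unverified lists."""
    verified, unverified = [], []
    for part_number in suggested_parts:
        (verified if part_number in APPROVED_CATALOG else unverified).append(part_number)
    return verified, unverified

# Example: two real catalog entries plus one plausible-sounding fabrication.
llm_suggestions = ["MTR-2200-X", "ESC-40A-V2", "GPS-NAV-9000"]
ok, flagged = verify_suggested_parts(llm_suggestions)
print("Verified:", ok)
print("Needs human review (not in catalog):", flagged)
```

The point is not the code itself but the workflow: anything the model proposes that cannot be traced back to a verified source gets routed to a human before it goes any further.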
Another challenge is confidentiality and protection of IP, both ours and our customers’. Uploading engineering documents or part specs into public GenAI tools (like ChatGPT or Copilot) can expose trade secrets or violate contracts. We should use controlled models like SiemensGPT and/or processes like Siemens AI Attack. Finally, AI-driven optimization might favor cost or performance but neglect sustainability and/or social impact. What if AI produces a product that performs well in simulations but is hard to inspect, repair or recycle? AI may also suggest materials or processes that are efficient but environmentally harmful (for instance, suggesting the use of high-carbon-footprint materials). Unless prompted to consider sustainability and societal impact, the AI model will optimize blindly. So, engineers must ensure objectives align with human values, not just mechanical efficiency.
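To make the “optimize blindly” point concrete, here is a minimal sketch of how an objective function changes once sustainability is given a weight. The candidate designs, weights and CO2 figures are made up purely for illustration.

```python
# Minimal sketch: scoring candidate designs on cost and performance alone
# versus with a sustainability term. All numbers are illustrative.

candidates = [
    {"name": "design_A", "cost": 120.0, "performance": 0.95, "kg_co2e": 40.0},
    {"name": "design_B", "cost": 140.0, "performance": 0.93, "kg_co2e": 12.0},
]

def score(design, w_cost=1.0, w_perf=200.0, w_co2=0.0):
    """Lower is better: cost plus a carbon penalty, minus rewarded performance."""
    return w_cost * design["cost"] - w_perf * design["performance"] + w_co2 * design["kg_co2e"]

# With no carbon weight, the optimizer prefers the high-carbon design A ...
print(min(candidates, key=score)["name"])
# ... while a modest carbon penalty flips the choice to design B.
print(min(candidates, key=lambda d: score(d, w_co2=2.0))["name"])
```

The model is indifferent to anything that is not in its objective; sustainability only influences the outcome once we explicitly put it there.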
Hopefully, what I have stated here provides a glimpse of the ethical challenges related to AI in digital industries.
Can you share specific examples of unexpected ethical challenges engineers have had to face when using AI?
Here is a real-world example of using GenAI to auto-generate part geometries based on requirements (e.g., “Generate a bracket to support 20 kg at a 60 mm offset,” which means the load is applied 60 mm away from the bracket’s mounting point). A competitor’s software was used to prototype this type of component faster. One thing that went wrong was that managers didn’t trust the outputs because they couldn’t verify structural integrity and safety without separate simulations. The second issue was that regulatory teams blocked deployment due to a lack of traceability in AI-generated design logic. The third issue was performance bottlenecks due to model size and the heavy computing cost, which would also have had a negative environmental impact because of high energy requirements (something that is rarely considered when discussing AI). The bottom line is that we must establish trust with our management and customers regarding AI. End users want to know why the AI says what it says, and I repeat: AI should not make the final decisions; it should act as a companion to the human, providing design inspiration, supporting decision-making, speeding up information retrieval and providing a mechanism for documentation.
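For readers wondering what an independent check of such a bracket could look like, here is a minimal first-order sketch of the “20 kg at 60 mm offset” requirement treated as a cantilever bending problem. The cross-section dimensions and the yield strength are assumed values for illustration; a real design would still go through full simulation and physical testing.

```python
# Minimal sketch: first-order cantilever bending-stress estimate for a bracket
# carrying 20 kg at a 60 mm offset. Cross-section and material values are
# assumptions for illustration only.

MASS_KG = 20.0
OFFSET_M = 0.060           # load applied 60 mm from the mounting point
G = 9.81                   # gravitational acceleration, m/s^2

# Assumed rectangular cross-section at the mounting point (hypothetical).
WIDTH_M = 0.020            # 20 mm
THICKNESS_M = 0.006        # 6 mm
YIELD_STRENGTH_PA = 276e6  # roughly 6061-T6 aluminium

bending_moment = MASS_KG * G * OFFSET_M              # N*m
section_modulus = WIDTH_M * THICKNESS_M ** 2 / 6     # m^3, rectangular section
bending_stress = bending_moment / section_modulus    # Pa
safety_factor = YIELD_STRENGTH_PA / bending_stress

print(f"Bending stress: {bending_stress / 1e6:.1f} MPa")
print(f"Safety factor vs. yield: {safety_factor:.1f}")
```

Even a back-of-the-envelope calculation like this gives a reviewer something traceable to compare against the AI-generated geometry before committing to expensive simulations.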
Since we recently acquired a life sciences company, here is another example, this one from the biomedical industry (and not related to the acquired company). AI can flag potential tumors, lesions or anomalies based on patterns learned from vast datasets of annotated medical images. However, the AI performs worse on images from underrepresented populations. For example, skin cancer detection models may be less accurate for patients with darker skin tones because the training data was mostly from lighter-skinned individuals, which can increase misdiagnosis rates in certain demographic groups.
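This kind of gap is easy to miss if a model is only evaluated on aggregate accuracy. Below is a minimal sketch of a per-subgroup check that surfaces it; the labels and predictions are synthetic, standing in for a held-out test set with demographic annotations.

```python
# Minimal sketch: compare model accuracy per demographic subgroup instead of
# reporting a single aggregate number. Data below is synthetic.

from collections import defaultdict

# (subgroup, true_label, predicted_label) -- synthetic examples
records = [
    ("lighter_skin", 1, 1), ("lighter_skin", 0, 0), ("lighter_skin", 1, 1),
    ("lighter_skin", 0, 0), ("darker_skin", 1, 0), ("darker_skin", 1, 1),
    ("darker_skin", 0, 0), ("darker_skin", 1, 0),
]

totals = defaultdict(lambda: [0, 0])  # subgroup -> [correct, total]
for subgroup, truth, prediction in records:
    totals[subgroup][0] += int(truth == prediction)
    totals[subgroup][1] += 1

for subgroup, (correct, total) in totals.items():
    print(f"{subgroup}: accuracy {correct / total:.0%} ({correct}/{total})")
```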
Therefore, as engineers, we must recognize the sources of mistrust, such as bias in our AI models, and address them before those models are released to our customers. I have written and spoken extensively about this topic in multiple blogs and podcasts over the last few years. Links to these publications, along with a sketch illustrating the pillars of trustworthy AI and the relevant critical factors, are shared in Figure 1 at the end of this blog.
What ethical concerns do engineers face in design, development and manufacturing processes?
In my opinion, designers, engineers and manufacturers today place too much trust in AI-generated outputs even when they don’t fully understand how the models work; this unjustified trust could lead to safety risks when incorrect or biased outputs are used in critical systems like airplanes, vehicles or medical devices. For example, we know that AI models can learn biases from the data they’re trained on, which might not reflect the diversity of real-world use cases; a predictive maintenance model trained almost exclusively on equipment from one manufacturer will most likely fail when applied to a more diverse set of systems.
We also know that many AI models are “black boxes.” This lack of transparency clashes with the need for explainability, traceability and compliance in design and engineering. For instance, engineering systems increasingly collect user data from smart factories or autonomous vehicles and use this data to train AI models. Some questions we need to be asking include: How is that data used? Who owns it? Was it collected with informed consent? How was it labeled? Furthermore, when something goes wrong, who is responsible: the engineer, the data scientist, the AI vendor or the model itself? Traditionally, we at Siemens emphasize accountability, but with AI in the loop, blame becomes ethically ambiguous.
Another concern of mine is that as engineers grow accustomed to relying on AI for decision-making, they may lose touch with the underlying principles and intuitions developed through hands-on experience. If this trend continues, critical thinking and domain expertise could erode over time, impacting innovation and safety. Therefore, we must pay particular attention to how data is collected and used to train AI models; ensure that those models explain the reasoning behind their decisions; remove any behavior, such as bias, that could cause harm; and guarantee that our employees continue lifelong learning, using AI to gain experience rather than following it blindly.
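One practical way to keep the data and accountability questions above from getting lost is to attach a provenance record to every model that ships. Here is a minimal sketch of such a record; the field names and values are illustrative, not a Siemens standard.

```python
# Minimal sketch: a provenance record that travels with a trained model and
# documents data origin, consent, labeling and accountability. All fields and
# values below are illustrative.

model_record = {
    "model_name": "predictive_maintenance_v1",  # hypothetical model
    "intended_use": "anomaly flagging with human review; not a sole decision-maker",
    "training_data": {
        "source": "factory sensor logs, 2021-2023 (illustrative)",
        "owner": "data-owning business unit",
        "consent": "collected under existing service agreements",
        "labeling": "failure events labeled by maintenance engineers",
        "known_gaps": "one equipment manufacturer over-represented",
    },
    "accountable_roles": {
        "engineering_owner": "lead engineer",
        "data_steward": "data science team",
        "final_decision": "human operator",
    },
}

for key, value in model_record.items():
    print(f"{key}: {value}")
```

Whatever form such a record takes in practice, the goal is that the answers to “who owns this data, how was it labeled, and who is accountable” exist in writing before the model is deployed, not after something goes wrong.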
In part two of this blog, we will discuss a few strategies engineering teams can use to balance speed and trustworthiness, talk about the ethical challenges organizations face when dealing with AI, and then finish with some thoughts on how we can overcome these challenges.

My related public blogs:
- Trust, the basis of everything in AI
- Achieving ethical AI for industrial applications
- Detection and mitigation of AI bias in industrial applications – Part 1
- Detection and mitigation of AI bias in industrial applications – Part 2
- Detection and mitigation of AI bias in industrial applications – Part 3
My related public podcasts:
Figure 1: Pillars of Trustworthy AI and links to blogs and podcasts related to AI Ethics.