The importance of considering ethical challenges and balancing speed vs. trustworthiness during AI implementation for digital industries — Part 2

Artificial Intelligence (AI) in general, and Generative AI (GenAI) in particular, can have a major impact on almost every aspect of the product management lifecycle in Digital Industries. Specific to system design and production, this impact can be felt from the concept phase through detailed design and simulation to manufacturing and beyond. However, for the impact to be positive, we must abide by certain ethical principles in addition to legal ones.
In part one of this blog series, I discussed some of the ethical challenges engineers encounter when utilizing AI, provided some specific examples to better explain the challenges, and ended with ethical concerns specific to design, development and manufacturing. In this second and final segment, we will look at strategies to overcome these challenges and concerns at the individual and organizational levels.
Note that the following answers reflect the state of current AI models, and that my opinions may develop as the industry does. Also, note that this blog was created with assistance from a GenAI tool.
What strategies can engineering teams use to balance rapid innovation with ethical concerns?
A good practice is building ethics into the development process from the outset; for instance, use ethics checklists at review checkpoints such as code reviews or sprint planning. We must apply value-sensitive design principles to ensure that all system functions, including privacy, fairness, cybersecurity and safety, align with stakeholder values. Furthermore, we should have internal teams intentionally probe the system for misuse, bias or unintended consequences to surface blind spots early in development, and run workshops on “What could go wrong with this AI model?”
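To make the checklist idea concrete, here is a minimal sketch of how such a list might be wired into a review gate so that a review cannot pass while items remain open; the specific items and the release_gate helper are illustrative assumptions, not an established standard.

```python
# Minimal sketch: an ethics checklist enforced at a design-review gate.
# The items below are illustrative assumptions, not an exhaustive standard.

ETHICS_CHECKLIST = {
    "training_data_provenance_documented": False,
    "bias_evaluation_on_representative_data": False,
    "privacy_impact_assessment_completed": False,
    "misuse_and_failure_modes_reviewed": False,  # the "what could go wrong?" workshop
    "human_override_path_defined": False,
}

def release_gate(checklist: dict) -> None:
    """Raise until every checklist item has been signed off."""
    open_items = [item for item, done in checklist.items() if not done]
    if open_items:
        raise RuntimeError(f"Ethics review incomplete: {open_items}")

release_gate(ETHICS_CHECKLIST)  # fails the review until the team completes each item
```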
Another good strategy is creating clear ethical guidelines for the organization by drafting internal AI ethics principles, something like “We will not build models that can’t be explained,” and then ensuring they’re not just written down but tied to Key Performance Indicators (KPIs). We must also consider diversity by including people from different disciplines and backgrounds to surface unusual cases and make it easier to identify societal impact and bias.
We should always monitor and measure impact, tracking not just performance but also ethical indicators such as fairness across demographics, and use post-deployment customer feedback loops to bring in the voice of the customer and continuously improve our AI models. Finally, in addition to customers, we must involve partners, affected communities or groups, and regulators throughout the development process and rigorously validate any AI-enabled system before launch. In other words, co-create and co-verify solutions with customers and their end users by first educating them on how to work effectively with any AI-powered application and then collaborating with them to improve it.
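As a concrete illustration of tracking an ethical indicator alongside performance, here is a minimal sketch that computes a demographic parity gap, the difference in favorable-outcome rates between groups, over a batch of predictions; the metric choice, the toy data, and the 0.1 alert threshold are assumptions for illustration only.

```python
# Minimal sketch: monitor a fairness indicator (demographic parity gap)
# alongside ordinary performance metrics. Threshold and data are illustrative.
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Largest difference in favorable-prediction rate between any two groups."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        favorable[group] += pred
    rates = [favorable[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

preds  = [1, 0, 1, 1, 0, 1, 0, 0]                  # 1 = favorable model outcome
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]  # demographic group labels
gap = demographic_parity_gap(preds, groups)
if gap > 0.1:  # illustrative alert threshold
    print(f"Fairness alert: favorable-rate gap of {gap:.2f} across groups")
```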
What are the greatest obstacles organizations face when trying to utilize AI?
Connecting AI with legacy systems that hold large amounts of data is a challenge, and GenAI makes it more complex because it requires real-time interactions and access to vector databases to find relevant, up-to-date information. No AI model is better than the information it is trained on or fed during inference, so unstructured or inaccessible data reduces performance; GenAI tools need well-organized, semantically meaningful content. Cost is another hurdle, especially for GenAI: fine-tuning a model or even just paying API usage fees can carry a steep price tag, cloud costs and model licensing aren’t cheap, and running or hosting large LLMs in-house is expensive.
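Since well-organized content and vector databases come up repeatedly here, a minimal sketch of the retrieval step in a retrieval-augmented generation (RAG) flow may help; the cosine ranking is real, but the in-memory list stands in for a production vector database, and the hand-made 2-D "embeddings" are placeholders for whatever embedding model a given stack actually provides.

```python
# Minimal sketch of retrieval-augmented generation (RAG): rank stored
# documents against the query and feed the best matches to the model.
# The in-memory list stands in for a real vector database.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_vec, store, k=3):
    """store: list of (doc_text, doc_vector) pairs."""
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

def build_prompt(question, docs):
    context = "\n".join(docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# Toy usage with hand-made 2-D "embeddings":
store = [("Gearbox spec: ratio 4.2:1", [1.0, 0.0]),
         ("Cafeteria menu: soup on Tuesdays", [0.0, 1.0])]
print(retrieve([0.9, 0.1], store, k=1))  # -> ['Gearbox spec: ratio 4.2:1']
```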
Another hindrance to the optimal use of AI is employees who lack the right skillset and use GenAI without proper training in prompt engineering, fine-tuning, and how AI models in general, and LLMs in particular, behave. Also, what happens if an AI model in our portfolio discriminates, plagiarizes or spreads misinformation? Accountability is murky with models that don’t show their reasoning, so organizations should require explainability in AI systems. Related to this is the fact that employees fear job displacement or simply don’t trust the model’s output. However, through the right set of training modules, organizations can empower their employees and teach them best practices.
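On requiring explainability, here is a minimal sketch of one common practice, using scikit-learn's permutation importance to show which inputs actually drive a model's predictions so its behavior can be questioned and audited; the synthetic dataset and random-forest model are toy assumptions, not a recommended production setup.

```python
# Minimal sketch of one explainability practice: permutation importance
# reveals which input features drive a model's predictions.
# Synthetic data and model choice are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5, n_informative=2,
                           random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {score:.3f}")  # higher = more influential
```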
To address some of these obstacles, we at Siemens have created the AI Educational Campaign channel so that every employee can acquire the right skillset and learn how to work effectively with AI models.
How have companies overcome the challenges identified here?
Blindly using AI in digital industries is like giving a bright, newly graduated engineer incredible power but zero experience. We need to guide our employees and teach them that whenever AI is involved in decision-making about industrial systems, they must build proper legal and ethical guardrails that consider compliance and safety issues as well as social and environmental impacts. As stated before, companies must establish customer trust in their AI-enabled solutions. End users need to know why the AI says what it says, and today’s AI models should not make final safety-critical decisions; instead, they should be used as a human companion for design inspiration, support in decision-making, and efficiency in information retrieval and documentation. Successful companies facilitate this augmentation of human capabilities through AI.
In addition to building hybrid processes that combine AI recommendations with human-in-the-loop decision-making, companies regularly audit predictions against reality and re-train models accordingly. They ensure diverse, representative datasets across all variations in the engineering and social data gathered for training, testing and validation. It bears repeating: companies must ensure that AI is never the sole decision-maker for safety-critical functions, and they must gather customer feedback for continuous improvement of their AI solutions. They must also provide the right tools for their employees so that those employees, in turn, are able to create systems that instill trust in their management and their customers.
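To illustrate the hybrid, human-in-the-loop pattern, here is a minimal sketch in which the AI only recommends: safety-critical cases always go to a person, low-confidence cases are escalated, and everything else is applied with an audit trail; the 0.9 threshold and routing labels are illustrative assumptions.

```python
# Minimal sketch of human-in-the-loop routing: the AI recommends, but it
# is never the sole decision-maker for safety-critical functions.
# The 0.9 confidence threshold is an illustrative assumption.
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str
    confidence: float
    safety_critical: bool

def route(rec: Recommendation) -> str:
    if rec.safety_critical:
        return "human_review"            # never auto-apply safety-critical actions
    if rec.confidence < 0.9:
        return "human_review"            # low confidence -> escalate to a person
    return "auto_apply_with_audit_log"   # logged for prediction-vs-reality audits

print(route(Recommendation("reduce spindle speed", 0.97, safety_critical=True)))
# -> human_review, regardless of confidence
```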
I highly recommend that my Siemens colleagues visit the AI Educational Campaign channel and other AI-related educational resources available within My Learning World to learn more about AI and how to address its challenges.
For those outside of Siemens, you can visit the OpenAI Academy and Maker Academy to learn more about AI and read articles such as Introduction to Generative AI, Prompting Guide 101, A Practical Guide to Building Agents, Ethical Considerations of AI in Business and The ethical problems of ‘intelligence-AI’. Readers should also not hesitate to contact me directly if they want to discuss any of these topics in more detail or if they want to provide a different perspective so that I can learn from them. Finally, I note again that this blog was created with assistance from a GenAI tool.
I practice what I preach when I state that AI should be used as a human companion for tasks like documentation. Thanks for your attention, and let’s invest in lifelong learning programs to help all employees adapt to changing job requirements and address challenges such as ethics when implementing industrial AI.