AI has risen to prominence in a number of fields over the last decade, offering benefits that would be impossible using conventional computing methods. But it has also given rise to its fair share of problems, some more easily resolved than others. For example, the demand for computing power to train AI algorithms, or the miniaturization needed to run AI on edge devices, was readily addressed by the rapid pace of technological advancement. However, other issues are only coming to light now as AI algorithms start to reach a mature state.
The most prominent issue for both researchers and industry professionals is that AI algorithms are basically a black box. What happens between the algorithm accepting an input and returning an output is a near-total mystery, even to the people who created it. This creates a situation that demands complete trust in the algorithm, something totally at odds with the need for process accountability and something that good engineers, designers, and technicians naturally try to avoid.
In manufacturing, accountability is important, especially when something goes wrong. It is vital to be able to track a process from beginning to end, but unfortunately, AI does not allow for this. A core part of this issue is that artificial neural networks (ANNs), the type of AI most commonly deployed in industry, can only make reliable predictions within the range of their training data. If the algorithm is trained to know that a single part weighs 1 lb and four parts weigh 4 lbs, it would be able to tell you how much two or three parts weigh, but not five. To truly employ an ANN in the factory, it needs training data that covers every possible situation so as not to inadvertently generate a bad value when encountering something outside its prediction range. However, gathering sufficient training data across such a wide breadth of scenarios is impractical, bordering on impossible for many applications.
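One practical consequence is that a deployed model should at least flag when a query falls outside the range its training data covered. The sketch below is a minimal illustration of that idea using the parts-weight example above; the function name and the 1-to-4-parts bounds are illustrative assumptions, not part of any specific product.

```python
def in_training_range(x, train_min, train_max):
    """Return True if a query lies inside the range covered by training data.

    Predictions inside this range are interpolations the network has seen
    evidence for; anything outside is an extrapolation and should not be
    trusted without further validation.
    """
    return train_min <= x <= train_max

# Training data covered batches of 1 to 4 parts.
print(in_training_range(3, 1, 4))  # True: the ANN can interpolate here
print(in_training_range(5, 1, 4))  # False: five parts is outside the data
```

In a real system this check would run per input feature, but the principle is the same: an ANN gives an answer for any input, so guarding the trained range is the application's job.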
Hence the rise of physics-informed AI simulation, especially physics-informed neural networks (PINNs), which help remove some of the mystery and uncertainty about what happens between an AI algorithm's inputs and outputs. PINNs shake up standard AI algorithms by including a very fundamental form of new training data: the laws of physics themselves. By combining a selection of crucial physics equations with an ANN's ability to learn, it is possible to create an AI algorithm capable of extrapolating across a much larger range of inputs, without the need for vast quantities of actual data.
The move to physics-informed AI addresses one of the core issues of AI: quantity vs. quality of data. A traditional ANN trained with only production data could likely attain a similar level of performance and scope as a PINN; however, it would require anywhere from thousands to millions of data points and months of training to achieve that. By contrast, a PINN can be trained with a much smaller amount of data, using physics to fill in the gaps. In this situation, equations are effectively a type of very high-quality data, each one taking the place of thousands of data points by explicitly defining a relationship across an effectively infinite range of potential inputs.
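Concretely, a PINN is trained against a composite loss: an ordinary data-fit term on the few available measurements, plus a "physics" term that penalizes the model wherever it violates the governing equation, evaluated at collocation points where no data exists at all. The following is a minimal numpy sketch of that idea for a simple governing equation, dy/dx = -y; the function names are illustrative, and real PINNs compute the derivative with automatic differentiation rather than the finite differences used here.

```python
import numpy as np

def pinn_loss(model, x_data, y_data, x_phys, lam=1.0, h=1e-4):
    """Composite PINN-style loss: fit the measurements AND obey the physics.

    Data term:    mean squared error on the observed points.
    Physics term: squared residual of the governing equation dy/dx = -y,
                  evaluated on collocation points with no measured data.
    The derivative is approximated by central differences for simplicity;
    production PINNs use automatic differentiation instead.
    """
    data_loss = np.mean((model(x_data) - y_data) ** 2)
    dydx = (model(x_phys + h) - model(x_phys - h)) / (2 * h)
    physics_loss = np.mean((dydx + model(x_phys)) ** 2)
    return data_loss + lam * physics_loss

x_data = np.array([0.0, 0.5])          # only two "measurements"
x_phys = np.linspace(0.0, 5.0, 50)     # physics enforced over a wide range
good = lambda x: np.exp(-x)            # exactly satisfies dy/dx = -y
print(pinn_loss(good, x_data, good(x_data), x_phys))  # near zero
```

A candidate model that fits the two data points but ignores the equation would score well on the data term yet be heavily penalized by the physics term, which is how the equation stands in for the thousands of data points the article describes.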
Returning to the idea of AI as a black box, PINNs do not entirely eliminate this issue. PINNs still rely on neural networks, and that black box can be hard for experts in manufacturing to trust. But because the AI engine is now built on a basis of well-known, proven equations, the inner workings of the system are no longer a total mystery. This creates something new, a so-called "grey box": it is still not possible to say exactly what is happening inside, but neither is it completely unknown, thanks to the fact that it is built upon well-established laws of physics.
At the end of the day, PINNs represent another step forward in overall AI development, helping to demystify some of the inscrutable nature of AI while simultaneously addressing AI’s insatiable need for massive training data. PINNs are already being relied on to great effect in some factories, and the solutions they develop may one day be integrated into other types of AI, helping develop a new generation of algorithms that are smarter and more aware of the physical world than ever before.
Siemens Digital Industries Software is driving transformation to enable a digital enterprise where engineering, manufacturing and electronics design meet tomorrow. Xcelerator, the comprehensive and integrated portfolio of software and services from Siemens Digital Industries Software, helps companies of all sizes create and leverage a comprehensive digital twin that provides organizations with new insights, opportunities and levels of automation to drive innovation.
Siemens Digital Industries Software – Where today meets tomorrow.