
AI Foundations: Understanding how neural networks learn

Understanding the building blocks of artificial intelligence.

In the world of AI, scientists, algorithm designers, and electronics and software engineers create advanced neural network systems focused on particular tasks, such as object or speech recognition. Even to these experienced people, the way a neural network learns a task or produces the correct answer can seem like magic performed inside a black box. This is especially true for deep neural networks that employ unsupervised learning. If the system works, why care how it learns? Because to debug why a network is getting the wrong answer, to tune the network, and to test the system, teams need to understand how it learns. For example, they might want to query the system to see why it misidentified an object. In the case of self-driving cars, that can be a life-or-death question.

As a first step, developers often add debugging code to algorithms that simply logs what is happening as the code runs. In the early days of rule-based AI systems, capabilities were added to query the system to determine which rules fired. But for neural networks, a more robust solution is necessary. One approach to understanding learning is self-explaining neural networks, a concept often called explainable AI (XAI).

The first step in deciding how to employ XAI is to strike a balance between two competing needs:

  • Feedback simple enough for humans to understand what is happening during learning;
  • Feedback robust enough to be useful to AI experts for deep analysis and debugging.

Software routines within the AI system can capture detailed information for AI experts, such as the weights being calculated during training and how those weights feed the equations that compute values within the network. This allows the experts to tune the network, optimizing its variables to get the best possible performance and accuracy for their particular application. But let's not take a deep dive into that world, as most of us are not attempting to become AI experts.
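Still, to give a flavor of what such a routine might capture, here is a minimal sketch, assuming a PyTorch model. The network, data, and hyperparameters below are placeholders for illustration only; it simply logs weight and gradient statistics during one training step:

import torch
import torch.nn as nn

# Placeholder network and data, purely for illustration.
model = nn.Sequential(
    nn.Linear(784, 128), nn.ReLU(),
    nn.Linear(128, 10),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

def log_parameters(step):
    """Record simple statistics of every weight tensor and its gradient."""
    for name, param in model.named_parameters():
        grad_norm = param.grad.norm().item() if param.grad is not None else 0.0
        print(f"step {step} | {name}: weight norm {param.norm().item():.4f}, "
              f"grad norm {grad_norm:.4f}")

# One illustrative training step on random stand-in data.
inputs = torch.randn(32, 784)
labels = torch.randint(0, 10, (32,))
optimizer.zero_grad()
loss = loss_fn(model(inputs), labels)
loss.backward()
log_parameters(step=0)   # capture weights and gradients before the update
optimizer.step()

In practice, experts feed statistics like these into visualization dashboards rather than reading raw log output, but the idea is the same: expose what the network is computing as it learns.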

For simpler feedback at a higher level of abstraction, many techniques exist to present clues about how a neural network trains itself and how it makes decisions. The results of these techniques provide valuable insight into the network and its training process, and they are often quickly interpreted by humans. For example, an object recognition system can present a visual representation of the pixels or edges of a picture that it determined were key to identifying an object.

Consider a neural network operating on an image of a sailboat. The network is designed to look for edges in order to learn the object. An explanation function is added that outputs the edges detected, so that a human can see which data points the network is using to learn the object. A heat-mapping function then displays which weights are strongest in identifying the image as a sailboat. Red points mark the highest-value weights and communicate to a human the key data points the network used to make the identification.

An explainable AI example of sailboat detection
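To illustrate how such a heat-mapping function might be built, here is a minimal sketch using vanilla gradient saliency with a pretrained classifier, assuming PyTorch and a recent torchvision. The image file name is a placeholder, and gradient saliency is only one of several heat-mapping techniques, not necessarily the one used to produce the figure above:

import torch
from torchvision import models, transforms
from PIL import Image

# A pretrained classifier stands in for whatever network the team is debugging.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# "sailboat.jpg" is a hypothetical input file.
image = preprocess(Image.open("sailboat.jpg").convert("RGB")).unsqueeze(0)
image.requires_grad_(True)

scores = model(image)
top_class = scores[0].argmax()
scores[0, top_class].backward()   # gradient of the top class score w.r.t. the pixels

# Saliency: per-pixel gradient magnitude, taking the max over the color channels.
saliency = image.grad.abs().max(dim=1)[0].squeeze()   # shape (224, 224)
# Displaying this tensor with a red "hot" colormap (e.g., matplotlib imshow)
# produces the kind of heat map described above.

Bright regions in the rendered saliency map mark the pixels that most influenced the decision, which is exactly the feedback a human needs to judge whether the network is looking at the right things.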

Explanation functions and heat mapping help developers determine whether the neural net is tuned correctly to identify images. These techniques can reveal surprising results. For example, a team fed the network hundreds of car pictures from the same manufacturer, all bearing the same logo. The heat map below (with the image overlaid) reveals that the network keyed in on the logo to “cheat,” using it as a shortcut to quickly identify the images as cars. Obviously, the network needs to be tuned to avoid this issue.

A car identified by the logo
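One way a team might confirm this kind of shortcut is occlusion sensitivity: slide a blank patch across the image and measure how much the class score drops at each position. The sketch below is a generic illustration assuming a PyTorch classifier; the patch size and stride are arbitrary placeholders, and this is not necessarily the method used to produce the figures here:

import torch

def occlusion_map(model, image, target_class, patch=32, stride=16):
    """Slide a gray patch over the image and record the drop in the
    target-class score at each position (a larger drop means that
    region mattered more to the decision)."""
    model.eval()
    _, _, height, width = image.shape
    with torch.no_grad():
        base = torch.softmax(model(image), dim=1)[0, target_class].item()
    rows = (height - patch) // stride + 1
    cols = (width - patch) // stride + 1
    heat = torch.zeros(rows, cols)
    for i in range(rows):
        for j in range(cols):
            occluded = image.clone()
            y, x = i * stride, j * stride
            occluded[:, :, y:y + patch, x:x + patch] = 0.5  # gray patch
            with torch.no_grad():
                score = torch.softmax(model(occluded), dim=1)[0, target_class].item()
            heat[i, j] = base - score
    return heat  # visualize to see which regions the score depends on

A sharp peak in the resulting map over the logo region would confirm that the network is leaning on the logo rather than on the car itself, pointing the team toward retraining with more varied data.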

In another example of “cheating,” a network was fed hundreds of tractor pictures. Many of these pictures contained identical link text embedded in the image. The heat map (with the image overlaid) showed that the network focused on this link text in order to identify the images as tractors.

A tractor identified by a link

XAI is a fairly recent field of research, and the AI world has a long way to go to find solid implementations that can be applied to every system. The good news is that many solutions are under development to explain how neural networks learn.

Thomas Dewey

Thomas Dewey (BSEE) has over 20 years of electronic design automation (EDA) experience at Siemens EDA (formerly Mentor Graphics). He has held various engineering, technical, and marketing responsibilities at the company, supporting custom integrated circuit design and verification solutions. For the last 4 years, he has researched, consulted, and written about all aspects of artificial intelligence.



This article first appeared on the Siemens Digital Industries Software blog at https://blogs.sw.siemens.com/thought-leadership/2021/10/12/ai-foundations-understanding-how-neural-networks-learn/