While attending an online AI class from MIT, my ears perked up when the discussion turned to how AI algorithms can sometimes get stuck when processing data. For example, if the algorithm is attempting to recognize an image, it might get trapped between deciding whether a particular feature matches or not. The technical phrase for this phenomenon is being “caught in a local minimum.” At that point, the algorithm is basically stuck in a loop. The professor then mentioned that to avoid this situation, researchers periodically introduce noise into the input data. The image pixels the algorithm is analyzing then look wildly different from what it expects, so it can quickly label that data “no match” and move on to the next set of pixels.
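To make that concrete, here is a toy sketch of the idea, and only a sketch; it assumes nothing about the lecture's actual algorithm, and every function and parameter value here is illustrative. Plain gradient descent on a bumpy loss settles into whichever minimum is nearest, while adding Gaussian noise to each update lets the search jump out of a shallow basin:

```python
import random

def f(x):
    # toy loss with a shallow local minimum near x = 1.13
    # and a deeper global minimum near x = -1.30
    return x**4 - 3 * x**2 + x

def grad(x):
    # derivative of f
    return 4 * x**3 - 6 * x + 1

def descend(x, steps=2000, lr=0.01, noise=0.0, seed=0):
    """Gradient descent; if noise > 0, each step also gets a
    Gaussian kick, which can shake x out of a shallow minimum."""
    rng = random.Random(seed)
    for _ in range(steps):
        x -= lr * grad(x) + rng.gauss(0, noise)
    return x

stuck = descend(1.0)               # noise-free: settles in the shallow minimum
jittered = descend(1.0, noise=0.2) # noisy: wanders between basins
escaped = descend(jittered)        # noise-free polish into whichever basin it reached
```

The noise-free run starting at x = 1.0 slides into the shallow minimum and stays there; the noisy run can cross the barrier between the two basins, which is the same intuition as perturbing the input pixels.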
We have all experienced trying to solve a difficult problem during the day. No matter how hard we try, we cannot reach a solution. We are stuck in a loop. Then, in the middle of sleep, we suddenly wake with the answer! This is why some people sleep with paper and pen nearby. But how did that solution suddenly pop into our heads?
Researchers are not 100% sure of the purpose of sleep. We all know that we die without it. But science has a theory as to one key benefit: during sleep, shutting down the brain regions that analyze problems allows us to calmly reexamine the facts and solutions of the day, fire up different brain regions, store anything unexpected into memory, and sometimes arrive at a solution by taking a new look at unexpected data interactions. If we are lucky enough to wake when this happens, we can write down the answer. In other words, we compare unexpected data to solutions and come up with new ideas as a result. Which takes us back to that MIT lecture.
If the goal of AI is to replicate what the human brain can do, then maybe we should be looking at the idea of modeling sleep. Sure, an AI system can run 24/7 without rest, but should it? According to a team at the Los Alamos National Laboratory, their AI systems became unstable during non-stop continuous learning. They too wondered if AI needs to “sleep” in order to optimize the system.
At Los Alamos, the team is trying to mimic the biological human brain. They use spiking neural networks to model the biological neural networks in our heads. While these spiking networks have many advantages, such as low power consumption and fast results, they do become unstable over time. In an attempt to stabilize their system, the team turned to the idea of mimicking sleep.
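Spiking networks differ from standard artificial neurons in that they communicate through discrete spikes over time rather than continuous activations. A minimal sketch of the textbook building block, a leaky integrate-and-fire neuron, gives the flavor (this is the generic model, not the Los Alamos team's actual code, and all parameter values are illustrative):

```python
def lif_spikes(inputs, tau=20.0, threshold=1.0, dt=1.0):
    """Leaky integrate-and-fire neuron: the membrane potential v
    leaks toward rest and rises with each input current; crossing
    the threshold emits a spike (1) and resets v to zero."""
    v, spikes = 0.0, []
    for current in inputs:
        v += dt * (-v / tau + current)  # leak term plus input current
        if v >= threshold:
            spikes.append(1)
            v = 0.0                     # reset after firing
        else:
            spikes.append(0)
    return spikes

strong = lif_spikes([0.3] * 10)   # steady strong input fires periodically
weak = lif_spikes([0.01] * 10)    # weak input never reaches threshold
```

Because the neuron only does work when it spikes, networks built from units like this can be very power-efficient, which is the advantage the Los Alamos team is after.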
To simulate sleep, the researchers decided to inject noise into the AI system, which sounds very much like that MIT lecture. They tried many types of noise and measured the results, and it turns out that Gaussian noise works best. Why? Its wide range of amplitudes and frequencies seems similar to the input our neurons receive during slow-wave sleep (SWS). SWS is the deepest stage of sleep before the REM phase, and researchers believe it is the phase in which we store memories.
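The paper's title (quoted at the end of this article) describes the sleep surrogate as “sinusoidally-modulated noise.” As a rough illustration of what such a signal could look like, the sketch below scales Gaussian samples with a slow sine wave; the function name, parameters, and values are my assumptions, not the paper's actual implementation:

```python
import math
import random

def sleep_noise(n, amp=1.0, freq=0.05, sigma=0.5, seed=0):
    """Sinusoidally-modulated Gaussian noise: a slow sine wave
    modulates the spread of Gaussian samples, loosely mimicking
    the waxing and waning input neurons receive during slow-wave
    sleep. Illustrative only."""
    rng = random.Random(seed)
    return [rng.gauss(0, sigma) * (1 + amp * math.sin(2 * math.pi * freq * t))
            for t in range(n)]

signal = sleep_noise(200)  # one "sleep" episode of 200 samples
```

During a simulated sleep phase, a signal like this would be fed to the network in place of real data, letting the network settle without learning anything spurious from it.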
When I first saw the original 1982 Blade Runner movie, I was floored. So many AI concepts were explored against that futuristic and dark background. I later found that the movie was based on Philip K. Dick's 1968 novel “Do Androids Dream of Electric Sheep?” It is no wonder he has a very prestigious science-fiction award named after him; look how far ahead of his time he was.
If you want to see all the math, search for the research paper succinctly titled: “Using Sinusoidally-Modulated Noise as a Surrogate for Slow-Wave Sleep to Accomplish Stable Unsupervised Dictionary Learning in a Spike-Based Sparse Coding Model.”