Designing Ultra Low Power AI Processors
“What makes power such a challenge to get right in an application like a doorbell camera is that, if you look at the power envelope of the system, it’s looking at the images and identifying the patterns in the images, creating the network,” said Anoop Saha, market development manager at Siemens EDA. “It’s a lot of computation and a lot of memory accesses.

“What happens in the market today is that the design team will take a card from Renesas or NXP or Arm or another provider. It’s a simple card with a CPU and some other things: a generic CPU, a generic compute element, and a generic memory element. There are some AI-specific pieces built around them, but most of the cards on the market are general-purpose. That’s the only way those companies can make a profit; the cards have to be general-purpose so they can apply to a wide variety of cases. But because it’s general-purpose, it’s not specialized to the use case a specific company is working on.

“That’s one of the reasons that, with the way the memory is organized and the way the application accesses the memory, the power envelope is 30 or 40 times more than what they would like it to be. It’s not what it would be if it were going into a data center, but it needs a constant power connection; otherwise, the battery will die very soon. They are using a general-purpose chip for a very specific AI application, but it’s a different way of computing and a different way of doing things, so that shoots up the power envelope of that chip. The more you specialize, the better it is.”
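The gap Saha describes is largely a memory story: fetching data from off-chip DRAM costs orders of magnitude more energy than reading from on-chip SRAM or performing a multiply-accumulate. The sketch below illustrates this with a toy energy model. Every constant, workload size, and function here is an illustrative assumption (loosely based on widely cited 45 nm per-operation energy estimates), not a figure from the article.

```python
# Back-of-envelope energy model for one inference on a doorbell-camera-class
# workload. All numbers are assumptions for illustration only.

DRAM_ACCESS_PJ = 640.0   # assumed energy per 32-bit off-chip DRAM read
SRAM_ACCESS_PJ = 20.0    # assumed energy per read from an on-chip SRAM bank
MAC_PJ = 0.2             # assumed energy per 8-bit multiply-accumulate

def inference_energy_uj(macs, weight_reads, weights_on_chip):
    """Return energy per inference in microjoules under this toy model."""
    mem_pj = SRAM_ACCESS_PJ if weights_on_chip else DRAM_ACCESS_PJ
    total_pj = macs * MAC_PJ + weight_reads * mem_pj
    return total_pj / 1e6  # pJ -> uJ

# Hypothetical small vision model: ~10M MACs, ~2M weight fetches per frame.
generic = inference_energy_uj(10_000_000, 2_000_000, weights_on_chip=False)
specialized = inference_energy_uj(10_000_000, 2_000_000, weights_on_chip=True)

print(f"generic card (weights from DRAM): {generic:8.1f} uJ/frame")
print(f"specialized (weights on chip)   : {specialized:8.1f} uJ/frame")
print(f"ratio                           : {generic / specialized:.0f}x")
```

Under these particular assumptions the ratio lands around 30x, in the range Saha cites, but the point is the structure, not the constants: because memory accesses dominate, keeping weights on chip (or reorganizing how the application touches memory) moves the power envelope far more than speeding up the compute does.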
Read the entire article on SemiEngineering, originally published on April 9, 2020.