Excerpt from article: “Power And Performance Optimization At 7/5/3nm”
At the edge, where people are building these new AI processors, it’s going through the same thing that you see with CPUs, where people need hardware accelerators. They’re having to build custom hardware to save energy, save power, as you do with any CPU or GPU or whatever processor they’re using. But the real question is, ‘How does the data move?’ You’ve got these huge chips. You need to move all of this data around the chip in an efficient way — in a way that doesn’t burn all your energy or power.

There are hundreds of architectures to choose from. With AI, there’s so much research going on that it’s hard to keep up. You couldn’t even read all of the research papers today to know what’s the best architecture. And so most teams are starting out not knowing if they’re going to be able to finish. With high-level synthesis, which is where I’m getting brought in, the teams have realized they’re going to have to build and test something, and then build it again to get it right.

Bryan Bowyer, director of engineering at Siemens EDA
Read the entire article on SemiEngineering, originally published on August 13th, 2020.