Power Is Limiting Machine Learning Deployments

By nileshthiagarajan

Excerpt from the article "Power Is Limiting Machine Learning Deployments":

The total amount of power consumed for machine learning tasks is staggering. Until a few years ago we did not have computers powerful enough to run many of the algorithms, but the repurposing of the GPU gave the industry the horsepower that it needed.

The problem is that the GPU is not well suited to the task, and most of the power consumed is waste. While machine learning has provided many benefits, much bigger gains will come from pushing machine learning to the edge. To get there, power must be addressed.

“You read about how datacenters may consume 5% of the energy today,” says Ron Lowman, product marketing manager for Artificial Intelligence at Synopsys. “This may move to over 20% or even as high as 40%. There is a dramatic reason to reduce chipset power consumption for the datacenter or to move it to the edge.”

Learning is compute-intensive. “There are two parts to learning,” says Mike Fingeroff, high-level synthesis technologist at Siemens EDA. “First, the training includes running the feed-forward (inference engine) part of the network. Then, the back-propagation of the error to adjust the weights uses gradient descent algorithms that require massive amounts of matrix manipulations.”
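The two parts Fingeroff describes can be illustrated with a minimal sketch (not from the article): a single linear layer trained by gradient descent, where both the feed-forward pass and the back-propagated gradient reduce to dense matrix operations. The data, layer shape, and learning rate here are arbitrary choices for illustration.

```python
import numpy as np

# Hypothetical toy setup: learn a known linear map from noise-free data.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))            # 64 samples, 3 features
true_W = np.array([[2.0], [-1.0], [0.5]])
y = X @ true_W                          # targets generated by the known map

W = np.zeros((3, 1))                    # weights to be learned
lr = 0.1                                # learning rate (arbitrary)
for _ in range(200):
    y_hat = X @ W                       # feed-forward (inference) pass: matrix multiply
    err = y_hat - y                     # prediction error
    grad = X.T @ err / len(X)           # back-propagated gradient: more matrix multiplies
    W -= lr * grad                      # gradient-descent weight update
```

Even this tiny example runs two large matrix multiplications per training step; scaling the same pattern to deep networks with millions of weights is what makes training so compute- and power-hungry.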

Read the entire article on SemiEngineering originally published on July 16th, 2019.


This article first appeared on the Siemens Digital Industries Software blog.