Excerpt from article: “Inference Moves To The Network”
Interviews with different players reveal three distinct categories of inference between the cloud and the edge:
- Inference done for the sake of the network itself;
- Inference offloaded from an edge device to a smartphone; and
- Inference provided as a service for other applications somewhere within the network.
Network operation has become increasingly complex as operators try to maximize bandwidth utilization while living up to their quality-of-service (QoS) agreements and security obligations. “The use of AI in improving infrastructure is huge,” said Anoop Saha, market development manager at Siemens EDA. This is particularly true for the new 5G networks being rolled out. Advanced 5G capabilities require significant predictive analytics, such as determining where to focus a beam using the new massive MIMO capabilities. Server chips capable of inference are starting to appear in base stations.
Read the entire article on SemiEngineering, originally published on May 7, 2020.