Excerpt from article: “Edge Complexity To Grow For 5G”
Edge computing is becoming as critical to the success of 5G as millimeter-wave technology will be to the success of the edge. In fact, it increasingly looks as if neither will succeed without the other.
5G networks won’t be able to meet 3GPP’s 4-millisecond-latency rule without some layer to deliver the data, run the applications and broker the complexities of multi-tier Internet apps across an unpredictable array of intelligent devices. And the edge, which initially was developed as a way for IoT managers to retain control of their data, will not function without ultra-fast wireless communications.
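A quick back-of-the-envelope calculation shows why that latency budget forces compute toward the edge. The sketch below assumes signals propagate through fiber at roughly two-thirds the speed of light (~200 km per millisecond); the 4 ms figure is the budget cited above, and the helper name and processing-time split are illustrative, not from the article.

```python
# Assumption: propagation in optical fiber at ~2/3 c, i.e. ~200 km/ms.
FIBER_SPEED_KM_PER_MS = 200.0
LATENCY_BUDGET_MS = 4.0  # the 3GPP latency budget cited above

def max_one_way_distance_km(budget_ms, processing_ms=0.0):
    """Farthest a server can sit if the whole budget covers a round trip
    plus a fixed processing time (radio and queuing delays ignored)."""
    propagation_ms = budget_ms - processing_ms
    return (propagation_ms / 2.0) * FIBER_SPEED_KM_PER_MS

print(max_one_way_distance_km(LATENCY_BUDGET_MS))       # 400 km with zero compute time
print(max_one_way_distance_km(LATENCY_BUDGET_MS, 3.0))  # 100 km once 3 ms goes to compute
```

Even under these generous assumptions, a regional or continental data center is out of range, which is the structural argument for an edge layer.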
Investments in both of these areas are growing, and so are the stakes for making this all work. The need to send answers back from remote cloud apps to end users fast enough to stop cars from crashing into each other is probably still a stretch. But moving the cloud closer to the data source, and prioritizing the kinds and amounts of data that require an immediate response, is an increasingly important trend. In fact, those factors are beginning to alter chip design as the entire industry begins sorting out which architectures work best for which applications.
“It all depends on what kind of functionality is needed by the device,” said Nimish Modi, senior vice president of marketing and business development at Cadence. “If it’s a car, there may be a ping-pong kind of communication as a car is traversing a road. But the infrastructure capabilities of the edge will determine what is the functionality of these things. The level of compute is increasing tremendously. The amount of data that is being generated at the edge is growing every day, and the signal-to-noise ratio is not very high. There is a whole bunch of data that is useless. But there also is the stuff that’s important and which requires fast localized decision-making—and it needs to be secure. And then there is edge storage and 5G, which is going to be prevalent. It’s a system-level capability that’s needed at the edge.”
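The triage Modi describes, dropping low-value data at the edge while acting locally on what is urgent, can be sketched as follows. This is a minimal illustration, not anything from the article: the function, thresholds, and scoring scheme are all hypothetical.

```python
# Hypothetical edge-side triage: discard low-information readings locally,
# act immediately on urgent ones, and batch the rest for the remote cloud.
NOISE_THRESHOLD = 0.1    # assumed score below which data is discarded
URGENT_THRESHOLD = 0.9   # assumed score above which local action is required

def triage(readings):
    """Split (score, payload) readings into locally-urgent and cloud-bound."""
    urgent, batch = [], []
    for score, payload in readings:
        if score < NOISE_THRESHOLD:
            continue                  # "useless" data: drop at the edge
        if score >= URGENT_THRESHOLD:
            urgent.append(payload)    # fast localized decision-making
        else:
            batch.append(payload)     # defer to the remote cloud
    return urgent, batch

urgent, batch = triage([(0.05, "noise"), (0.95, "obstacle"), (0.5, "telemetry")])
print(urgent, batch)  # ['obstacle'] ['telemetry']
```

The point of the sketch is the split itself: most of the raw stream never leaves the device, which is what makes the edge's compute and storage requirements system-level concerns.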
Those layers of requirements are beginning to affect what kind of hardware is used where, and for which application.
“A CPU is too slow and a GPU uses too much power,” said Ellie Burns, director of marketing for digital design implementation solutions at Siemens EDA. “This is why we’re seeing more TPUs. An all-purpose generic solution uses too much power and it’s too expensive, because moving data around takes a huge amount of memory. The future will include little arrays doing calculations and implementing algorithms on ASICs for better power and performance.”
Read the entire article on SemiEngineering, originally published on July 2, 2019.