Stanford University: Edge Machine Learning DNN Accelerator SoC Design Using Catapult HLS | Webinar

By Mathilde Karsenti

This webinar covers the design and verification of the systolic array-based DNN accelerator taped out by Stanford, the performance optimizations applied to it, and its integration into an Edge Machine Learning Accelerator SoC. Kartik Prabhu, a PhD student in Electrical Engineering at Stanford University, presents the project and his experience using High-Level Synthesis (HLS).

HLS offers a fast path from specification to physical-design-ready RTL by enabling very high design and verification productivity. HLS handles several lower-level implementation details, including scheduling and pipelining, which allows designers to work at a higher level of abstraction and focus on the more important architectural decisions. Designing at the C++ level allows for rapid iteration, thanks to faster simulations, easier debugging, and the ability to quickly explore different architectures. HLS is a natural fit for designing Deep Neural Network (DNN) accelerators, given its ability to automatically generate the complex control logic they often need. The accelerator achieves 2.2 TOPS/W and performs ResNet-18 inference in 60 ms and 8.1 mJ.

What you will Learn

  • Effective use of HLS for ML accelerator design
  • Analyzing performance and optimizations
  • Integrating an HLS design into an SoC

Who Should Watch

  • System architects and RTL/HW designers interested in using HLS for ML accelerators

View the webinar and slides

This article first appeared on the Siemens Digital Industries Software blog.