{"id":236,"date":"2019-07-11T19:43:52","date_gmt":"2019-07-12T02:43:52","guid":{"rendered":"https:\/\/blogs.plm.automation.siemens.com\/t5\/Digital-Transformations\/HLS-verifies-artificial-intelligence-for-ADAS-in-autonomous-cars\/ba-p\/608942"},"modified":"2026-03-26T12:04:26","modified_gmt":"2026-03-26T16:04:26","slug":"hls-verifies-artificial-intelligence-for-adas-in-autonomous-cars","status":"publish","type":"post","link":"https:\/\/blogs.sw.siemens.com\/thought-leadership\/hls-verifies-artificial-intelligence-for-adas-in-autonomous-cars\/","title":{"rendered":"HLS verifies artificial intelligence for ADAS in autonomous cars"},"content":{"rendered":"<p><P>Artificial Intelligence (AI) involves the use of neural networks, and you might have heard terms like RNNs, DCNs, GANs and SNNs and wondered what they mean. Fundamentally, they are nothing but sets of algorithms designed to recognize patterns. These algorithms help cluster and classify huge volumes of data.<\/P><\/p>\n<p><P>Here are a few popular neural networks in the AI space; each has at least some relevance to work on ADAS and automated driving applications:<\/P><\/p>\n<p><P><U>Recurrent Neural Networks (RNNs):<\/U> Connections between nodes form a directed graph along a temporal sequence \u2013 for pattern recognition such as handwriting or speech recognition<\/P><\/p>\n<p><P><U>Deep Convolutional Networks (DCNs):<\/U> Analogous to the connectivity pattern of neurons in the human brain, specifically inspired by the visual cortex \u2013 for pattern recognition in image, video and natural language processing<\/P><\/p>\n<p><P><U>Generative Adversarial Networks (GANs):<\/U> Two neural networks contesting with each other in a game-theoretic setup \u2013 used to generate photorealistic images<\/P><\/p>\n<p><P><U>Spiking Neural Networks (SNNs):<\/U> Mimic natural neural networks and include the concept of time along with neuronal and synaptic states \u2013 for information processing and modeling of the 
central nervous system<\/P><\/p>\n<p><P>Beyond AI, driving without human intervention requires a sophisticated suite of sensors. These include LiDAR, radar and cameras, which together continuously generate a high volume of real-time data about the environment surrounding the car. Neural networks help the car extract meaningful information from this data and react in real time.<\/P><\/p>\n<p><P>Implementing machine learning (ML) algorithms in hardware is challenging. To achieve the required accuracy, an inference chip for autonomous vehicles needs to address the following challenges:<\/P><\/p>\n<p><OL><br \/>\n<LI><U>Performance:<\/U> A single high-definition camera can capture a 1920&#215;1080 image at 60 frames per second; a car can have 10 or more such cameras<\/LI><br \/>\n<LI><U>Power:<\/U> AI inference can be a massively power-intensive operation, especially because of the high volume of accesses to off-chip memory<\/LI><br \/>\n<LI><U>Functional Safety:<\/U> Safety is the overarching and comprehensive requirement, so the need to detect functional safety issues that might creep in because of various hardware faults can also command significant compute resources<\/LI><br \/>\n<\/OL><br \/>\n<P>The biggest challenge in creating such a chip is the turnaround time for the traditional ASIC design flow. 
It can take anywhere from several months to a year to implement new ASIC hardware.<\/P><br \/>\n<P>My colleague, David Fritz, talks through the basic steps in creating an inferencing engine in <a href=\"https:\/\/semiengineering.com\/building-an-efficient-inferencing-engine-in-a-car\/\" target=\"_blank\" rel=\"noopener nofollow noreferrer\">this interview<\/A> with Semiconductor Engineering editor Ed Sperling.<\/P><\/p>\n<p><P><\/p>\n<div class=\"video-embed-center video-embed\"><iframe loading=\"lazy\" class=\"embedly-embed\" src=\"https:\/\/cdn.embedly.com\/widgets\/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FmbmMuNQ_evk%3Ffeature%3Doembed&amp;url=http%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DmbmMuNQ_evk&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FmbmMuNQ_evk%2Fhqdefault.jpg&amp;key=fad07bfa4bd747d3bdea27e17b533c0e&amp;type=text%2Fhtml&amp;schema=youtube\" width=\"600\" height=\"337\" scrolling=\"no\" frameborder=\"0\" allow=\"autoplay; fullscreen\" allowfullscreen=\"true\" title=\"Video\"><\/iframe><\/div>\n<p><\/P><\/p>\n<p><P>Initially, an autonomous system architect or designer relies on tools like TensorFlow, Caffe, MATLAB and Theano to aid in capturing, collecting, categorizing and verifying data in a high-level abstract environment. These high-level deep learning frameworks let the designer explore a multitude of parameters and analyze and select the optimal solution for the algorithm.<\/P><\/p>\n<p><P>Once the algorithm is determined, the designer captures it in C++ or SystemC. The next step is to design the actual hardware algorithmic block for autonomous applications. The most efficient way to do this is to use High-Level Synthesis (HLS) to generate RTL from the C++ or SystemC.<\/P><\/p>\n<p><P>HLS has in fact been around for decades, with various input languages used to specify a given problem at a high level, decoupled from clock-level timing. 
Indeed the basic idea behind HLS, which has found new life with the rise of AI, is to separate functionality from implementation, so that a design can be retargeted and re-implemented at any time. As a result, HLS accelerates algorithmic design with a higher level of abstraction, resulting in 50x less code than RTL. That means smaller design teams, shorter development time and faster verification.<\/P><\/p>\n<p><P>The next step is verification, which includes formal property checking and linting as part of this flow to ensure that the source code is \u201cclean\u201d for both synthesis and simulation. Tools are also required that can measure code coverage, including line, branch, and expression coverage. The goal is RTL that is correct by construction, achieved through precise consistency of representation and simulation results between the C++ algorithm and the synthesized RTL.<\/P><\/p>\n<p><P>The Catapult\u00ae HLS Platform and PowerPro\u00ae solutions from Siemens make up the industry\u2019s leading HLS platform. Catapult empowers designers to use industry-standard ANSI C++ and SystemC to describe functional intent and to move up to a more productive abstraction level. The Catapult Platform provides a powerful combination of high-level synthesis paired with PowerPro for measurement, exploration, analysis, and optimization of RTL power, plus a verification infrastructure for seamless verification of C++ and RTL. Indeed HLS is ultimately part of a much larger verification story, one that requires simulation of various automated vehicle systems, from the chip level on up through various vehicle subsystems and even beyond the vehicle to simulated urban environments.<\/P><\/p>\n<p><P>Our PAVE360 program, announced in May 2019, is among our prominent recent examples of this vision, extending digital twin simulation far beyond processors and allowing secure, multi-supplier collaboration in a way that\u2019s both unique in the market and essential in advancing AV technology. 
For more on PAVE360, see this TIRIAS Research <a href=\"https:\/\/www.tiriasresearch.com\/downloads\/av-simulation-extends-to-silicon\/\" target=\"_blank\" rel=\"noopener nofollow noreferrer\">whitepaper<\/A>, written by Jim McGregor. (Jim and David also appeared as guests on <a href=\"http:\/\/www.autonocast.com\/blog\/2019\/5\/22\/144-david-fritz-and-jim-mcgregor-on-chips-for-autonomous-vehicles\" target=\"_blank\" rel=\"noopener nofollow noreferrer\">this episode<\/A> of Autonocast to talk about PAVE360; Catapult comes up briefly as well.)<\/P><\/p>\n<p><P>And to learn more about HLS, please download our whitepaper <a href=\"https:\/\/go.mentor.com\/57l_k\" target=\"_blank\" rel=\"noopener nofollow noreferrer\">High-Level Synthesis for Autonomous Drive<\/A>. Find out how algorithmically intensive designs for autonomous vehicles are a perfect fit for HLS \u2013 and how the methodology has been successfully adopted by major semiconductor suppliers like Bosch, STMicroelectronics, and Chips&amp;Media in the automotive space. Nothing abstract about names like that.<\/P><\/p>\n<p><P><STRONG>About the author<SPAN>&nbsp;<\/SPAN><\/STRONG><BR \/><STRONG>Andrew Macleod<\/STRONG><EM><SPAN>&nbsp;<\/SPAN>is the director of automotive marketing at Siemens, focusing on the Mentor product suite. He has more than 15 years of experience in the automotive software and semiconductor industry, with expertise in new product development and introduction, automotive integrated circuit product management and global strategy, including a focus on the Chinese auto industry. He earned a 1st class honors engineering degree from the University of Paisley in the UK and lives in Austin, Texas. 
Follow him on Twitter<SPAN>&nbsp;<\/SPAN><a href=\"https:\/\/twitter.com\/AndyMacleod_MG\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">@AndyMacleod_MG<\/A>.<\/EM><\/P><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Artificial Intelligence (AI) involves the use of neural networks and you might have heard of terms like CNNs, DCNs, GANs, SNNs and wonder what they mean. Fundamentally, they are nothing but a set of &#8230;<\/p>\n","protected":false},"author":19245,"featured_media":237,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"spanish_translation":"","french_translation":"","german_translation":"","italian_translation":"","polish_translation":"","japanese_translation":"","chinese_translation":"","footnotes":""},"categories":[1],"tags":[12,21,25,15],"industry":[120],"product":[],"coauthors":[],"class_list":["post-236","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-news","tag-artificial-intelligence","tag-autonomous-vehicles","tag-cars-of-the-future","tag-innovating-design","industry-automotive-transportation"],"featured_image_url":"https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/19\/2019\/09\/autonomous-vehicle-375.jpg","_links":{"self":[{"href":"https:\/\/blogs.sw.siemens.com\/thought-leadership\/wp-json\/wp\/v2\/posts\/236","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blogs.sw.siemens.com\/thought-leadership\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.sw.siemens.com\/thought-leadership\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.sw.siemens.com\/thought-leadership\/wp-json\/wp\/v2\/users\/19245"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.sw.siemens.com\/thought-leadership\/wp-json\/wp\/v2\/comments?post=236"}],"version-history":[{"count":1,"href":"https:\/\/blogs.sw.siemens.com\/thought-leadership\/wp-json\/wp\/v2\/posts\/236\/revisions"}],"pred
ecessor-version":[{"id":238,"href":"https:\/\/blogs.sw.siemens.com\/thought-leadership\/wp-json\/wp\/v2\/posts\/236\/revisions\/238"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/blogs.sw.siemens.com\/thought-leadership\/wp-json\/wp\/v2\/media\/237"}],"wp:attachment":[{"href":"https:\/\/blogs.sw.siemens.com\/thought-leadership\/wp-json\/wp\/v2\/media?parent=236"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.sw.siemens.com\/thought-leadership\/wp-json\/wp\/v2\/categories?post=236"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.sw.siemens.com\/thought-leadership\/wp-json\/wp\/v2\/tags?post=236"},{"taxonomy":"industry","embeddable":true,"href":"https:\/\/blogs.sw.siemens.com\/thought-leadership\/wp-json\/wp\/v2\/industry?post=236"},{"taxonomy":"product","embeddable":true,"href":"https:\/\/blogs.sw.siemens.com\/thought-leadership\/wp-json\/wp\/v2\/product?post=236"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/blogs.sw.siemens.com\/thought-leadership\/wp-json\/wp\/v2\/coauthors?post=236"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}