New additions to the DAC program in 2018, machine learning, artificial intelligence, and deep learning are revolutionizing today’s technological landscape. The ability of machines to observe and learn from data is key to enabling many of tomorrow’s most exciting technologies, such as fully autonomous vehicles and personalized medical care. At the 55th Design Automation Conference (DAC), Mentor’s experts and partners will be discussing the unique challenges of designing machine learning and artificial intelligence systems, as well as the many applications that stand to benefit from them. Take a look below for a summary of our sessions.
FEATURED CONFERENCE SESSIONS
Tutorial 8: Machine learning for EDA applications
Mon, June 25 from 1:00pm – 5:00pm | Room 3020
A low-latency and platform-independent neural network design
Tue, June 26 from 3:30pm – 5:00pm | Room 2010
Accelerating functional coverage analysis using clustering machine learning techniques
Wed, June 27 from 1:30pm – 3:00pm | Room 2012
NVIDIA: Design and Verification of a Machine Learning Accelerator SoC Using an Object-Oriented HLS-Based Design Flow
A high-productivity digital VLSI flow for designing complex SoCs is presented. It includes High-Level Synthesis tools, an efficient implementation of Latency-Insensitive Channels, and MatchLib – an object-oriented library of synthesizable SystemC and C++ components. The flow was demonstrated on a programmable machine learning inference accelerator SoC designed in 16nm FinFET technology.
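The latency-insensitive channels mentioned above decouple a producer and a consumer with a ready/valid handshake and bounded buffering, so each side can stall independently without breaking the protocol. The sketch below is a plain C++ software analogy of that idea, in the spirit of libraries such as MatchLib; the class name, depth, and method names are illustrative assumptions, not the MatchLib API.

```cpp
#include <cassert>
#include <cstddef>
#include <deque>
#include <optional>

// Illustrative software model of a latency-insensitive (ready/valid)
// channel. Names and the default depth are assumptions for this sketch.
template <typename T, std::size_t Depth = 2>
class LIChannel {
public:
    // Producer side: returns false (back-pressure) when the buffer is full.
    bool push(const T& v) {
        if (buf.size() >= Depth) return false;  // consumer not ready
        buf.push_back(v);                       // valid data accepted
        return true;
    }
    // Consumer side: an empty optional models "no valid data this cycle".
    std::optional<T> pop() {
        if (buf.empty()) return std::nullopt;
        T v = buf.front();
        buf.pop_front();
        return v;
    }
private:
    std::deque<T> buf;  // bounded buffering decouples the two sides' timing
};
```

Because correctness depends only on the handshake, not on cycle timing, a synthesis tool is free to insert or remove pipeline stages between modules connected this way.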
FotoNation: A Designer Life with HLS – Faster Computer Vision/Neural Networks
As a company with a history of innovation and experience in computational imaging algorithms, FotoNation is always looking for ways to speed up its development processes. This presentation discusses two aspects of the company’s experience moving from RTL to HLS. The first is applying HLS to algorithms, such as face detection, that FotoNation already knows well in RTL, allowing a direct comparison. The second is using HLS to develop new neural network accelerators, and how HLS could take them from algorithm to critical FPGA demonstrators in a timeframe that would not be possible with a traditional RTL flow.
HLS for Custom Neural Network Inference in FPGA/ASIC
Neural networks are developed and trained in high-performance floating-point compute environments. For deployment, however, the inference implementation can be reduced in precision and mapped to hardware accelerators to meet power and real-time requirements. General-purpose accelerators may not be optimal for power and performance, whereas HLS enables designers to quickly build exactly what is needed. This session steps through a CNN (Convolutional Neural Network) inference implementation, highlights the architectural choices, and shows how HLS can be used to rapidly design custom CNN accelerators.
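The core of such a reduced-precision CNN inference kernel is an integer convolution with a wide accumulator, which an HLS tool can map directly to a small multiply-accumulate array. The sketch below illustrates the idea; the 8-bit data type and the tiny feature-map and kernel sizes are assumptions chosen for clarity, not a specific accelerator's configuration.

```cpp
#include <array>
#include <cstdint>

// Hypothetical sizes, for illustration only.
constexpr int IMG = 6;             // input feature-map height/width
constexpr int K   = 3;             // kernel height/width
constexpr int OUT = IMG - K + 1;   // "valid" output size, no padding

// 8-bit quantized 2-D convolution: the kind of integer kernel that
// replaces a floating-point layer for low-power inference.
std::array<std::array<int32_t, OUT>, OUT>
conv2d_int8(const std::array<std::array<int8_t, IMG>, IMG>& in,
            const std::array<std::array<int8_t, K>, K>& w)
{
    std::array<std::array<int32_t, OUT>, OUT> out{};
    for (int r = 0; r < OUT; ++r)
        for (int c = 0; c < OUT; ++c) {
            int32_t acc = 0;  // 32-bit accumulator avoids 8-bit overflow
            for (int i = 0; i < K; ++i)
                for (int j = 0; j < K; ++j)
                    acc += int32_t(in[r + i][c + j]) * int32_t(w[i][j]);
            out[r][c] = acc;
        }
    return out;
}
```

In a real design, the architectural choices the session mentions would appear here as loop unrolling, pipelining, and memory-partitioning directives applied to these nested loops.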
HLS to the Rescue for Computer Vision and Deep Learning
The algorithms that teach a computer to see, understand, and make decisions require a significant amount of parallel compute performance at the lowest possible power. HLS (High-Level Synthesis) is seeing accelerated adoption for computer vision applications thanks to some unique capabilities and market drivers. This session presents an introduction to HLS, what real customers are achieving with it, and why it is such a good fit for rapidly delivering high-performance, low-power hardware from fast-changing algorithms and neural networks.
AMS Verification Methodology for GPUs in AI and Deep Learning Applications
Artificial intelligence and deep learning applications are the main drivers behind the current exponential increase in demand for computational power. This demand is now predominantly addressed by GPUs rather than traditional CPUs. GPUs are high-performance, high-throughput chips that require I/O bandwidth on the order of gigabits per second along with high-bandwidth memory interfaces. This talk provides an overview of AI and deep learning applications using GPUs, the added complexity they bring to AMS verification, and the methodology used to address these verification challenges efficiently and predictably.
Mentor ML Characterization Analytics Demo
ML Characterization Analytics provides signoff validation for characterized foundation IP, memories, and custom/analog blocks. With Analytics, both library producers and users can validate libraries quickly and effectively using machine learning-based analysis and visualization. Library issues that could lead to design delays or chip failure can now be identified and rectified within hours instead of weeks. In this live demo, we will show key library validation steps and use the Analytics Explorer GUI to visually debug the source of library issues. We will demonstrate how Trend Analysis locates hard-to-find characterization errors by helping users identify outliers and incorrect trends in characterized values. We will also show how differences between library revision updates can be visualized in a clean and simple manner using the tool.
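To make the idea of trend analysis concrete, consider one simple check of the kind such a tool might run: cell delay characterized across increasing load capacitance should itself increase monotonically, so any dip in the sequence flags a suspect table entry. The sketch below is a minimal, hypothetical illustration of that check; the function name, data, and the strict-monotonicity rule are assumptions, not how Mentor's tool actually works.

```cpp
#include <cstddef>
#include <vector>

// Flag indices where delay *decreases* as load capacitance grows,
// violating the expected monotonic trend in a characterized table.
// (Simplified rule for illustration; real checks also consider slew,
// corners, and tolerances.)
std::vector<std::size_t>
find_trend_outliers(const std::vector<double>& delays_by_increasing_load)
{
    std::vector<std::size_t> suspects;
    for (std::size_t i = 1; i < delays_by_increasing_load.size(); ++i)
        if (delays_by_increasing_load[i] < delays_by_increasing_load[i - 1])
            suspects.push_back(i);  // delay dipped: likely a bad entry
    return suspects;
}
```

Run over every timing arc in a library, even a check this simple narrows a multi-million-value validation problem down to a short list of entries worth a human look.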