Applications of AI/ML in Functional Verification
By Yunhong Min, Support Application Engineer, DVT
Artificial Intelligence (AI) and Machine Learning (ML) are transforming industries worldwide, including Electronic Design Automation (EDA). In Functional Verification (FV), these technologies offer promising ways to improve efficiency, accuracy, and automation. This post explores where AI/ML is reshaping FV workflows, the most promising applications, and the challenges that remain.
The Need for AI/ML in Functional Verification
Functional Verification remains a time-intensive process for semiconductor designers, often consuming nearly half of their design efforts [1]. By integrating ML, designers can automate repetitive tasks, improve regression management, and accelerate coverage closure, delivering higher-quality results in less time. This shift is pivotal as design complexity continues to grow.
Key Applications of AI/ML in Functional Verification
AI/ML applications in FV span several critical areas, including:
- Requirement Engineering: Translating Natural Language (NL) specifications into SystemVerilog Assertions (SVA) or other verification languages, as well as automated code generation directly from design specifications (a prompting sketch follows this list).
- Coverage Closure: ML-guided random test generation to ensure comprehensive test coverage.
- Verification Acceleration: Automated selection of the most efficient formal proof engines to streamline Formal Verification.
- Bug Detection: ML-assisted root cause analysis, enabling faster debugging and issue resolution.
- Regression Management: Automated identification of high-priority test cases and prediction of failure scenarios.
For a more detailed overview, see the survey by Yu et al. [2].
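To make the first item concrete, here is a minimal sketch of prompting an LLM to translate a one-sentence NL requirement into an SVA property. It assumes the OpenAI Python client purely for illustration; the model name, prompt wording, and requirement text are placeholders, not a recommendation of any particular tool or flow.

```python
# Hedged sketch: NL requirement -> SystemVerilog Assertion via an LLM.
# Assumes the OpenAI Python client (pip install openai) and an
# OPENAI_API_KEY in the environment; model and prompts are illustrative.
from openai import OpenAI

client = OpenAI()

requirement = (
    "Whenever the request signal 'req' is asserted, the grant signal "
    "'gnt' must be asserted within 1 to 4 clock cycles."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "You translate hardware requirements into SystemVerilog "
                "Assertions (SVA). Reply with a single assert property "
                "statement and nothing else."
            ),
        },
        {"role": "user", "content": requirement},
    ],
)

print(response.choices[0].message.content)
# A correct response would resemble:
#   assert property (@(posedge clk) req |-> ##[1:4] gnt);
```

As the success rates discussed below suggest, any generated assertion still needs review and simulation before it can be trusted.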
Insights from LLM Applications in Functional Verification
Recent studies demonstrate the potential of Large Language Models (LLMs) in advancing FV tasks:
- Assertion Generation: Studies show that LLMs can translate NL specifications into SystemVerilog Assertions (SVA); in one study, 9.29% of the generated assertions were correct, and 80% of those correct assertions were produced under optimal prompting conditions [5].
- Coverage Prediction: In studies on Python code, LLMs predicted a method's complete coverage exactly in only 20-30% of cases, while labeling individual statements correctly 84-90% of the time; performance on HDL remains a work in progress due to limited training data [6].
- Bug Reproduction: Given a bug report, LLMs can generate tests that reproduce approximately 33.5% of all reported bugs [7].
- Test Stimulus Generation: LLMs have achieved up to 98.94% coverage on simpler designs, though results taper to 86.19% on more complex designs [8]; a simplified coverage-feedback loop is sketched below.
- Code Generation: LLMs trained on open-source repositories have generated Verilog code with success rates of 59.9% to 98.7%, depending on the problem’s complexity [9].
For further technical details, consult Yu et al.’s recent study on LLM paradigms in verification [3].
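A common thread in the coverage-oriented work above is a feedback loop: generate stimuli, measure which coverage bins were hit, then bias the next round toward the holes. The toy sketch below shows that loop in its simplest form; the bins, weights, and "simulation" are invented stand-ins, and in practice a learned model (or, in LLM4DV's case, an LLM) replaces the hand-written update rule.

```python
# Hedged sketch of coverage-driven stimulus biasing. Everything here is
# a toy stand-in for a real flow that would drive a simulator.
import random
from collections import Counter

BINS = range(8)                  # toy functional-coverage bins (e.g. opcodes)
weights = {b: 1.0 for b in BINS}
hits = Counter()

def sample_stimulus():
    """Draw one stimulus value according to the current weights."""
    return random.choices(list(weights), weights=list(weights.values()))[0]

for _ in range(200):             # each iteration stands in for one test run
    op = sample_stimulus()
    hits[op] += 1                # pretend the simulation covered this bin
    # Feedback step: down-weight bins already hit, so the generator
    # concentrates on the remaining coverage holes.
    weights[op] = 1.0 / (1 + hits[op])

coverage = 100 * sum(1 for b in BINS if hits[b]) / len(BINS)
print(f"bins hit: {dict(hits)}  coverage: {coverage:.0f}%")
```

Even this naive down-weighting steers stimulus toward uncovered bins faster than uniform random sampling, which is the basic intuition behind ML-guided coverage closure.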
Challenges to Overcome
Despite their potential, AI/ML technologies in FV face several hurdles:
- Limited Datasets: The scarcity of open-source HDL datasets hampers model training, leading to inaccuracies or “hallucinations.”
- Scalability: AI/ML models that perform well on smaller designs often struggle to maintain efficiency and accuracy when applied to large-scale projects.
Addressing these challenges requires a collaborative industry effort to create robust datasets and refine model scalability.
Conclusion
AI/ML technologies pave the way for a more efficient and automated approach to Functional Verification. From translating specifications to debugging and coverage closure, these tools promise to revolutionize workflows and address the ever-increasing complexity of modern semiconductor designs. As adoption grows, it will be exciting to witness the evolution of verification processes powered by these cutting-edge technologies.
For more insights into Siemens EDA solutions, explore our Functional Verification resources.
References
[1] Harry Foster, “2022 Functional Verification Study” (2022)
[2] Dan Yu, Harry Foster, and Tom Fitzpatrick, “A Survey of Machine Learning Applications in Functional Verification” (2023)
[3] Dan Yu, Tom Fitzpatrick, Waseem Raslan, Harry Foster, and Eman El Mandouh, “Paradigms of Large Language Model Applications in Functional Verification” (2024)
[4] Dan Yu, “Verification Data Analytics with Machine Learning” (2023)
[5] Rahul Kande et al., “LLM-Assisted Generation of Hardware Assertions” (2023)
[6] Michele Tufano et al., “Predicting Code Coverage without Execution” (2023)
[7] Sungmin Kang et al., “Large Language Models Are Few-Shot Testers: Exploring LLM-Based General Bug Reproduction” (2023)
[8] Zixi Zhang et al., “LLM4DV: Using Large Language Models for Hardware Test Stimuli Generation” (2023)
[9] Baptiste Rozière et al., “Code Llama: Open Foundation Models for Code” (2023)