{"id":9604,"date":"2013-07-29T06:59:05","date_gmt":"2013-07-29T13:59:05","guid":{"rendered":"https:\/\/blogs.mentor.com\/verificationhorizons\/?p=9604"},"modified":"2026-03-27T08:34:57","modified_gmt":"2026-03-27T12:34:57","slug":"part-7-the-2012-wilson-research-group-functional-verification-study","status":"publish","type":"post","link":"https:\/\/blogs.sw.siemens.com\/verificationhorizons\/2013\/07\/29\/part-7-the-2012-wilson-research-group-functional-verification-study\/","title":{"rendered":"Part 7: The 2012 Wilson Research Group Functional Verification Study"},"content":{"rendered":"<h3>Testbench Characteristics and Simulation Strategies<\/h3>\n<p>This blog is a continuation of a series of blogs that present the highlights from the 2012 Wilson Research Group Functional Verification Study (<a href=\"https:\/\/blogs.mentor.com\/verificationhorizons\/blog\/2013\/04\/23\/prologue-the-2012-wilson-research-group-functional-verification-study\/\" target=\"_blank\" rel=\"noopener\">for background on the study, click here<\/a>).<\/p>\n<p>In my previous blog (<a href=\"https:\/\/blogs.mentor.com\/verificationhorizons\/blog\/2013\/07\/22\/part-6-the-2012-wilson-research-group-functional-verification-study\/\" target=\"_blank\" rel=\"noopener\">click here<\/a>), I focused on the controversial topic of effort spent in verification. In this blog, I focus on some of the 2012 Wilson Research Group findings related to testbench characteristics and simulation strategies. 
Although I am shifting the focus away from verification effort, I believe that the data presented in this blog relates to my previous blog and needs to be considered when calculating effort.<\/p>\n<h3>Time Spent in Full-Chip versus Subsystem-Level Simulation<\/h3>\n<p>Let\u2019s begin by looking at Figure 1, which shows the percentage of time (on average) that a project spends in full-chip or SoC integration-level verification versus subsystem and IP block-level verification. The mean time performing full-chip verification is represented by the dark green bar, while the mean time performing subsystem verification is represented by the light green bar. Keep in mind that this graph represents the industry average. Some projects spend more time in full-chip verification, while other projects spend less time.<\/p>\n<h3 align=\"center\"><a href=\"https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/54\/2013\/07\/Fig-7-1.gif\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-medium wp-image-9608\" src=\"https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/54\/2013\/07\/Fig-7-1-520x390.gif\" alt=\"\" width=\"520\" height=\"390\" \/><\/a><strong>Figure 1. Mean time spent in full-chip versus subsystem simulation<\/strong><\/h3>\n<h3>Number of Tests Created to Verify the Design in Simulation<\/h3>\n<p>Next, let\u2019s look at Figure 2, which shows the number of tests various projects create to verify their designs using simulation. The graph represents the findings from the 2007 Far West Research study (in gray), the 2010 Wilson Research Group study (in blue), and the 2012 Wilson Research Group study (in green). Note that the curves look remarkably similar over the past five years. The median number of tests created to verify the design is within the range of (&gt;200 \u2013 500) tests. 
It is interesting to see a sharp increase in the percentage of participants who reported that fewer tests (1 \u2013 100) were created to verify a design in 2012. It\u2019s hard to determine exactly why this was the case\u2014perhaps it is due to the increased use of constrained-random testing (which I will talk about shortly), or perhaps there has been an increased use of legacy tests. The study was not designed to go deeper into this issue or to uncover the root cause. This is something I intend to study informally next year through discussions with various industry thought leaders.<\/p>\n<p align=\"center\"><a href=\"https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/54\/2013\/07\/Fig-7-2.gif\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-medium wp-image-9612\" src=\"https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/54\/2013\/07\/Fig-7-2-520x390.gif\" alt=\"\" width=\"520\" height=\"390\" \/><\/a><\/p>\n<p align=\"center\"><strong>Figure 2. Number of tests created to verify a design in simulation<\/strong><\/p>\n<h3>Percentage of Directed Tests versus Constrained-Random Tests<\/h3>\n<p>Now let\u2019s compare the percentage of directed testing that is performed on a project to the percentage of constrained-random testing. In reality, there is a wide range in the amount of directed and constrained-random testing performed on various projects. For example, some projects spend all of their time doing directed testing, while other projects combine techniques and spend part of their time doing directed testing\u2014and the rest doing constrained-random. For our comparison, we will look at the industry average, as shown in Figure 3. 
The average percentage of tests that are directed is represented by the dark green bar, while the average percentage of tests that are constrained-random is represented by the light green bar.<\/p>\n<p align=\"center\"><strong><a href=\"https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/54\/2013\/07\/Fig-7-3a.gif\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-medium wp-image-9664\" src=\"https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/54\/2013\/07\/Fig-7-3a-520x390.gif\" alt=\"\" width=\"520\" height=\"390\" \/><\/a><\/strong><\/p>\n<p align=\"center\"><strong>Figure 3. Mean directed versus constrained-random testing performed on a project<\/strong><\/p>\n<p>Notice how the percentage mix of directed versus constrained-random testing has changed over the past two years. Today we see that, on average, a project performs more constrained-random simulation. In fact, between 2010 and 2012 there was a 39 percent increase in the use of constrained-random simulation on a project. One driving force behind this increase has been the maturing and acceptance of both the SystemVerilog and UVM standards\u2014since these two standards facilitate easier implementation of a constrained-random testbench. In addition, today we find that an entire ecosystem has emerged around both the SystemVerilog and UVM standards. This ecosystem consists of tools, verification IP, and industry expertise, such as consulting and training.<\/p>\n<p>Nonetheless, even with this increased adoption, you will find that constrained-random simulation is generally only performed at the IP block or subsystem level. For full SoC-level simulation, directed testing and processor-driven verification are the prominent simulation-based techniques in use today.<\/p>\n<h3>Simulation Regression Time<\/h3>\n<p>Now let\u2019s look at the time that\u00a0various projects spend in a simulation regression. 
Figure\u00a04 shows the trends in simulation regression time by comparing the 2007 Far West Research study (in gray) with the 2010 Wilson Research Group study (in blue) and the 2012 Wilson Research Group study (in green). There really hasn\u2019t been a significant change in the time spent in a simulation regression within the past three years. You will find that some teams spend days or even weeks in a regression. Yet today, the industry median is between 8 and 16 hours, and for many projects, regression time has decreased over the past few years. Of course, this is another example where deeper analysis is required to truly understand what is going on. To begin with, the survey questions should probably be refined to distinguish simulation times for IP-level versus SoC integration-level regressions. We will likely do that in future studies\u2014with the understanding that we will not be able to show trends (or at least not initially).<\/p>\n<p align=\"center\"><a href=\"https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/54\/2013\/07\/Fig-7-4.gif\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-medium wp-image-9620\" src=\"https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/54\/2013\/07\/Fig-7-4-520x390.gif\" alt=\"\" width=\"520\" height=\"390\" \/><\/a><strong>Figure 4. 
Simulation regression time trends<\/strong><\/p>\n<p>In my next blog (click <a href=\"https:\/\/blogs.mentor.com\/verificationhorizons\/blog\/2013\/08\/05\/part-8-the-2012-wilson-research-group-functional-verification-study\/\" target=\"_blank\" rel=\"noopener\">here<\/a>), I\u2019ll focus on design and verification language trends, as identified by the 2012 Wilson Research Group study.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Testbench Characteristics and Simulation Strategies This blog is a continuation of a series of blogs that present the highlights from&#8230;<\/p>\n","protected":false},"author":71592,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"spanish_translation":"","french_translation":"","german_translation":"","italian_translation":"","polish_translation":"","japanese_translation":"","chinese_translation":"","footnotes":""},"categories":[1],"tags":[313,326,493,506,528,533,718,751,758,787,819],"industry":[],"product":[],"coauthors":[],"class_list":["post-9604","post","type-post","status-publish","format-standard","hentry","category-news","tag-313","tag-accellera","tag-formal-verification","tag-functional-verification","tag-ieee","tag-ieee-1800","tag-simulation","tag-systemverilog","tag-testbench","tag-uvm","tag-verification"],"_links":{"self":[{"href":"https:\/\/blogs.sw.siemens.com\/verificationhorizons\/wp-json\/wp\/v2\/posts\/9604","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blogs.sw.siemens.com\/verificationhorizons\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.sw.siemens.com\/verificationhorizons\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.sw.siemens.com\/verificationhorizons\/wp-json\/wp\/v2\/users\/71592"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.sw.siemens.com\/verificationhorizons\/wp-json\/wp\/v2\/comments?post=9604"}],"version-history":[{"count":1,"href":"https:\/\/
blogs.sw.siemens.com\/verificationhorizons\/wp-json\/wp\/v2\/posts\/9604\/revisions"}],"predecessor-version":[{"id":19756,"href":"https:\/\/blogs.sw.siemens.com\/verificationhorizons\/wp-json\/wp\/v2\/posts\/9604\/revisions\/19756"}],"wp:attachment":[{"href":"https:\/\/blogs.sw.siemens.com\/verificationhorizons\/wp-json\/wp\/v2\/media?parent=9604"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.sw.siemens.com\/verificationhorizons\/wp-json\/wp\/v2\/categories?post=9604"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.sw.siemens.com\/verificationhorizons\/wp-json\/wp\/v2\/tags?post=9604"},{"taxonomy":"industry","embeddable":true,"href":"https:\/\/blogs.sw.siemens.com\/verificationhorizons\/wp-json\/wp\/v2\/industry?post=9604"},{"taxonomy":"product","embeddable":true,"href":"https:\/\/blogs.sw.siemens.com\/verificationhorizons\/wp-json\/wp\/v2\/product?post=9604"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/blogs.sw.siemens.com\/verificationhorizons\/wp-json\/wp\/v2\/coauthors?post=9604"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}