# Cats != Coverage

*Verification Horizons blog, March 8, 2019 – https://blogs.sw.siemens.com/verificationhorizons/2019/03/08/cats-coverage/*

> "Any sufficiently advanced technology is indistinguishable from magic."
>
> – Arthur C. Clarke, *Profiles of The Future*

> "We actually made a map of the country, on the scale of a mile to the mile!"
> "Have you used it much?" I enquired.
> "It has never been spread out, yet," said Mein Herr. "The farmers objected: they said it would cover the whole country, and shut out the sunlight! So we now use the country itself, as its own map, and I assure you it does nearly as well."
>
> – Lewis Carroll, *Sylvie and Bruno Concluded*

For about as long as Functional Coverage has been "a thing," there has been the alluring vision of a magic system: write a testbench that randomly stimulates your design, check your coverage, and automatically adjust the stimulus constraints to target the remaining coverage holes. Run such a system through a few loops and voilà – you've magically reached your coverage goals! In fact, a few papers presented at DVCon US last week dealt with this very topic, and it even came up during a panel on the impact of Deep Learning on Verification. One might be tempted to think that Deep Learning has become Verification's version of Arthur C. Clarke's sufficiently advanced technology, indistinguishable from magic.
As much as I hate to be the bearer of bad news, it's not going to happen – at least not anytime soon.

A closer examination of the papers on this topic shows that the approaches they describe work only when there is a direct correlation between the coverage points and the possible stimulus values being generated. In such an environment it is indeed possible to randomize the stimulus, track the values generated, and narrow the constraints so that the next randomization eliminates the already-covered values from consideration. While this sounds great, you still have to randomize the values every cycle, and narrowing the constraints actually forces the solver to work harder each time. If your goal is to maximize coverage in the fewest number of tests without wasting time, you should check out Portable Stimulus with a tool like Questa inFact® instead, since it uses the coverage goals to automatically generate the minimal set of tests that is guaranteed to hit your coverage.

I took advantage of the concentration of old friends and experts at DVCon US to spend some time with John Aynsley (who is both) talking about this very topic. John has been studying Deep Learning for quite some time and has shared his thoughts on it in standing-room-only workshops at the last two DVCons. The problem is that Deep Learning requires some measurable, quantifiable, and ultimately predictable relationship between the stimulus and the coverage, and establishing one is a much more difficult – ultimately intractable – problem than recognizing pictures of cats. Instead of recognizing a pattern similar to one you've already seen, coverage closure requires determining what stimulus must be applied to hit coverage points that have never been hit before.
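To make the limitation concrete, here is a minimal sketch of the randomize-then-narrow loop described above, in plain Python rather than SystemVerilog, with invented names. It closes coverage quickly precisely because each coverage bin maps one-to-one to a stimulus value:

```python
import random

def close_coverage(bins, max_iters=10_000, seed=0):
    """Naive coverage-driven loop: randomize a value, record the bin it
    hits, then narrow the 'constraint' so already-covered bins are
    excluded from the next randomization. This works ONLY because each
    coverage bin corresponds directly to one stimulus value."""
    rng = random.Random(seed)
    remaining = set(bins)      # the ever-narrowing constraint
    covered = []
    iters = 0
    while remaining and iters < max_iters:
        value = rng.choice(sorted(remaining))  # "randomize" under constraint
        covered.append(value)                  # sample the covergroup
        remaining.discard(value)               # narrow the constraint
        iters += 1
    return covered, iters

hits, n = close_coverage(range(16))
# With a 1:1 stimulus-to-coverage map, every randomization hits a new
# bin, so closure takes exactly len(bins) iterations.
print(n)                                # 16
print(sorted(hits) == list(range(16)))  # True
```

Remove that one-to-one mapping – say, reaching a bin requires a specific *sequence* of values driving an internal state machine – and discarding already-covered input values no longer steers the loop toward the remaining bins at all.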
It's not unlike asking a neural network that was trained to recognize cats to recognize cars.

As I mentioned, the approach can work when the coverage points match the input values, but if you're trying to establish a correlation between input stimulus and functional coverage metrics about the inner workings of a state machine deeply embedded in your design, it simply can't be done. Deep Learning requires a "cost function" that can be evaluated and minimized to achieve the "learning." Modern complex designs simply do not have a cost function that can be predictably evaluated. The best you could do would be to use a reference model to evaluate the cost function of a given stimulus sequence, but for anything beyond the most trivial coverage you'd need a "reference model" that is essentially the design itself. And then you're looking at Lewis Carroll's map with a "scale of a mile to the mile." You could try to use it, but the farmers would object.
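The cost-function objection can be illustrated with a hypothetical toy (the function names and the "magic sequence" are invented for illustration, not taken from any real design): when a coverage bin fires only on one exact input sequence, the "uncovered bins remaining" cost is flat almost everywhere, so there is no gradient for any learning scheme to follow toward the hole:

```python
import itertools

def bins_hit(seq):
    # Hypothetical deeply-embedded checker: the coverage bin fires only
    # when this exact input sequence arrives.
    return 1 if seq == (3, 1, 4, 1, 5) else 0

def cost(seq):
    # "Uncovered bins remaining" – the quantity a learner would minimize.
    return 1 - bins_hit(seq)

# Exhaustively evaluate the cost over every 5-value sequence drawn
# from {0..5}: 6**5 = 7776 candidate stimuli.
costs = {s: cost(s) for s in itertools.product(range(6), repeat=5)}
flat = sum(1 for c in costs.values() if c == 1)
# Every point but one scores identically, so nearby sequences give a
# learner no signal about which direction to move.
print(flat, len(costs))  # 7775 7776
```

Recognizing cats works because similar pictures score similarly; here, sequences one value away from the target score exactly the same as sequences that are completely wrong.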
Tags: Deep Learning, Functional Coverage, Machine Learning, Portable Stimulus