
Evolution is a tinkerer

I was recently quoted in an EDA DesignLine blog as saying that “it is a myth that ABV is a mainstream technology.” Actually, the original quote comes from an extended abstract I wrote for an invited tutorial at the Conference on Computer Aided Verification (CAV) in 2008, titled Assertion-Based Verification: Industry Myths to Realities. My claim is based on the Farwest Research 2007 study (commissioned and sponsored by Mentor Graphics), which found that approximately 37 percent of the industry had adopted simulation-based ABV techniques and 19 percent had adopted formal ABV techniques. Now, those of you who know me know that I am an optimist—and therefore the statistics from these industry studies reveal a wonderful opportunity for design projects to improve themselves. 😉 However, the problem of adopting advanced functional verification techniques is not limited to ABV. For example, the study revealed that only 48 percent of the industry performs code coverage. Let me repeat that: I said code coverage, not even something as exotic as functional coverage (which only about 40 percent of the industry has adopted)! Furthermore, only about 41 percent of the industry has adopted constrained random verification.

[Figure: Advanced Functional Verification Adoption]
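For those of you who have not used these techniques, here is a small illustrative sketch in plain Python (deliberately not SystemVerilog and not any particular tool flow) of the basic ideas behind the terms above: constrained-random stimulus, an assertion-style check of the kind ABV automates, and a simple functional coverage model. The toy FIFO and every name in it are hypothetical.

    import random

    # Illustrative toy example only: a trivial FIFO model standing in for a DUT.
    class ToyFifo:
        def __init__(self, depth=4):
            self.depth = depth
            self.data = []

        def push(self, value):
            if len(self.data) < self.depth:
                self.data.append(value)

        def pop(self):
            return self.data.pop(0) if self.data else None

    def constrained_random_value():
        # "Constrained random": random data, but biased toward corner values.
        return random.choice([0x00, 0xFF, random.randint(1, 0xFE)])

    # Simple functional coverage model: did we ever observe empty, full, and partial states?
    coverage = {"empty": 0, "full": 0, "partial": 0}

    fifo = ToyFifo()
    for _ in range(1000):
        if random.random() < 0.5:
            fifo.push(constrained_random_value())
        else:
            fifo.pop()

        # Assertion-style check: this invariant must hold on every "cycle".
        assert len(fifo.data) <= fifo.depth, "FIFO overflow -- design bug"

        # Sample functional coverage.
        if not fifo.data:
            coverage["empty"] += 1
        elif len(fifo.data) == fifo.depth:
            coverage["full"] += 1
        else:
            coverage["partial"] += 1

    print("coverage hits:", coverage)

Code coverage, by contrast, simply asks whether every line or branch of the design description was exercised at all; the point of the sketch is only to show how modest the machinery behind these terms really is.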

Now, we could argue about which is the best approach to measuring coverage or achieving functional closure, but that is not the point. The question is, how do we as an industry evolve our verification capabilities beyond 1990s best practices?

In my mind, one of the first steps in the evolutionary process is to define a model for assessing an organization’s existing verification capabilities. For a number of years I’ve studied variants of the Capability Maturity Model (that is, CMM and its successor, CMMI) as a possible tool for assessment. After numerous discussions with many industry thought leaders and experts, I’ve concluded that the CMM is really not an ideal model for assessing hardware organizations. Nonetheless, there is certainly a lot we can learn from observing the CMM applied to actual software projects.

For those of you unfamiliar with the CMM, its origins date back to the early 1980s. During this period, the United States Department of Defense established the Software Engineering Institute at Carnegie Mellon University in response to a perceived software development crisis related to escalating development costs and quality problems. One of the key contributions resulting from this effort was the published work titled The Capability Maturity Model: Guidelines for Improving the Software Process. The CMM is a framework for assessing the effectiveness of an organization’s software process, and it provides an evolutionary path for improving that process from ad hoc and immature to disciplined and mature.
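To make that staged, evolutionary idea concrete, here is a small, purely illustrative sketch in Python. The five level names are the classic ones from the original CMM; the mapping from verification practices to levels is a hypothetical rubric of my own for illustration only, and it is not the CMM’s definition and not the Evolving Capabilities model discussed below.

    from enum import IntEnum

    # The five staged levels of the original CMM.
    class MaturityLevel(IntEnum):
        INITIAL = 1      # ad hoc; success depends on individual heroics
        REPEATABLE = 2   # basic discipline; earlier successes can be repeated
        DEFINED = 3      # processes documented and standardized
        MANAGED = 4      # processes quantitatively measured and controlled
        OPTIMIZING = 5   # continuous, data-driven process improvement

    # Hypothetical rubric: map a set of adopted verification practices to a level.
    def assess(practices):
        if {"functional_coverage", "coverage_closure_metrics"} <= practices:
            return MaturityLevel.MANAGED
        if {"code_coverage", "constrained_random"} <= practices:
            return MaturityLevel.DEFINED
        if "documented_test_plan" in practices:
            return MaturityLevel.REPEATABLE
        return MaturityLevel.INITIAL

    print(assess({"documented_test_plan", "code_coverage", "constrained_random"}).name)
    # prints: DEFINED

The staged structure is what makes such a model useful as an assessment tool: it turns a vague question (“how good is our process?”) into a concrete checklist that can be revisited over time.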

Fundamental to maturing an organization’s process capabilities is an investment in developing skills within the organization. To assist in this effort, we have launched an ambitious project to evolve an organization’s advanced functional verification skills through the Verification Academy. In fact, we have an introductory module titled Evolving Capabilities that provides my first attempt at a simple assessment model. I anticipate that this model will itself evolve over time as I receive valuable feedback on refining and improving it. Nonetheless, the simple Evolving Capabilities model as it exists today provides a wonderful framework for organizing multiple modules focused on evolving an organization’s advanced functional verification capabilities.

I realize that evolving technical skills is only part of the solution to successfully advancing the industry’s functional verification capabilities. Yet education is an important step. I’d be interested in hearing your thoughts on the subject. Why do you think the industry as a whole has been slow in its adoption of advanced functional verification techniques? What can be done to improve this situation?

Harry Foster
Chief Scientist Verification

Harry Foster is Chief Scientist Verification for Siemens Digital Industries Software; and is the Co-Founder and Executive Editor for the Verification Academy. Harry served as the 2021 Design Automation Conference General Chair, and is currently serving as a Past Chair. Harry is the recipient of the Accellera Technical Excellence Award for his contributions to developing industry standards. In addition, Harry is the recipient of the 2022 ACM Distinguished Service Award, and the 2022 IEEE CEDA Outstanding Service Award.


Comments

5 thoughts about “Evolution is a tinkerer”
  • Thanks Harry for the excellent post. I am seeing that one of the main obstacles to adoption of advanced verification techniques is the HW/SW boundary, in the sense that designers and verification engineers look at each other through a barrier. Designers consider verification a SW job, and verification engineers consider design a HW job. Practically speaking, to get the best out of the process there should be no barriers, and there should be seamless, smooth movement between design/verification and HW/SW. If both camps believed that nowadays the wall between HW and SW has vanished, we would see smooth and fast-paced adoption. It is amazing to see that the winners are the ones working on breaking down that wall!

  • This is not only an EDA issue! In 1968, Dick Fosbury masterfully demonstrated his new high-jump technique by winning the Olympic gold. 12 years later, in 1980, nearly 1 out of 5 Olympic finalists still used the old straddle technique – and of course lost… Quite an inspiring story: http://bit.ly/4ZWl5E

  • Adding to Mohamed’s comments – educating and changing the mindset of Design and Verification Engineers is crucial. The complexity of the verification task in today’s designs requires that it be taken seriously, which means verification engineers with the right skill sets, not designers writing tests after they’re done with RTL. But this does not absolve the verification engineer from understanding the HW/design. Likewise, the design engineer now needs to respect the constraints imposed by the (more complex) verification methodology – a minor design change on an interface protocol may cause a lot more headaches for the verification engineer.

    Finally, one aspect that the constrained random verification methodologies need to work on is the ability to quickly bring up a verification infrastructure for early debug that can later be extended for regressions. It just takes way too much time today to be of any use to the designer at the beginning to help flush out bugs.

  • Remembering back to the old Daisy days, one of their simulator iterations was dynamic in nature, allowing quick edits to a design and then continuing the simulation. I don’t think we should have a dynamic DUT in a simulation, but I do think we might gain by having a dynamically interpreted verification language and a REPL (http://en.wikipedia.org/wiki/Read-eval-print_loop) or interactive shell, together with something like doctest (http://en.wikipedia.org/wiki/Doctest), that would lower the barrier to people doing any systematic verification at all (a small sketch follows below).

    People for whom programming is not their specialty seem to find it more productive to use dynamic scripting environments rather than statically compiled language environments (cf. Matlab, SAGE, biopython, bioperl, …).

    Just as Ruby on Rails and Django made it much easier to build most web sites, we should look to dynamic languages to provide similar gains in giving designers an easier environment for verification.

    Many verification techniques rely on regressions consisting of many runs of very similar simulations or properties, etc. Whilst it is easy to buy and configure a compute farm to run multiple jobs, vendors and customers need to work together to make licensing for such compute farms realistic. If you verify, you will need a compute farm!

    – Paddy.
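A minimal sketch of the doctest-style approach Paddy describes, assuming plain Python; the gray_encode function here is just a hypothetical stand-in for a small reference model, not taken from any real project:

    # The expected behaviour is written inline as an interactive-session
    # transcript; running the module re-checks every line of that transcript.
    def gray_encode(n):
        """Convert a binary number to its Gray-code equivalent.

        >>> gray_encode(0)
        0
        >>> gray_encode(2)
        3
        >>> gray_encode(7)
        4
        """
        return n ^ (n >> 1)

    if __name__ == "__main__":
        import doctest
        doctest.testmod()  # passes silently; pass verbose=True to see each check

The appeal is exactly the low barrier Paddy points to: the executable check lives in the documentation, and an interactive shell can exercise the same model ad hoc.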

  • What if coverage closure is an evolutionary dead end? What proportion of failed projects have carefully measured coverage (and still implemented the wrong thing at the wrong time)? It seems the majority of the survivors have made it through the last 20 years without adopting the “new” techniques.

    Do you remember what we did before Specman turned up? I and many others used to spend time identifying the “resources” in the architecture and implementation of a chip, then mapping out the transactions that targeted each of those resources, and finally constructing tests that stressed each resource by generating lots of transactions that targeted each one individually. It was a lot less effort than constructing complicated metrics and found just as many, if not more, bugs. It wasn’t so good for the EDA companies, because they couldn’t sell tools and courses to support the methodology.

    So is this evolution benefitting the design industry or the EDA industry? And what data can you bring to bear to support your answer?

    Geoff


This article first appeared on the Siemens Digital Industries Software blog at https://blogs.sw.siemens.com/verificationhorizons/2009/12/14/evolution-is-a-tinkerer/