
Time for Another Revision of the SystemVerilog IEEE 1800 Standard

By Dave Rich

Between Accellera and the IEEE, there have been seven revisions of the SystemVerilog Language Reference Manual (LRM) over the past 20 years, five of them in the first decade. Many users continued to shun SystemVerilog because tool and vendor support for the rapidly changing LRM was so inconsistent. To this day, people still stay with Verilog-1995 syntax and don’t use features added by Verilog-2001 (e.g., ANSI-style ports, the power operator). So the brakes were put on the SystemVerilog LRM, giving vendors a chance to catch up and giving users the stability they wanted. The end of 2016 was the last time any of the IEEE SystemVerilog technical committees met to add changes to the LRM.

But technology never stands still. Over the last four years, vendors have made extensions to their tools based on customer demands, and users are left with a hodgepodge of features with incomplete documentation or none at all (for example, extending most built-in functions so they can be used in a constant function). Other users simply won’t wait for extensions and instead work around language limitations by creating extra code packages or incorporating other languages into their flow (Chisel, Perl, Python, Ruby, …). Now, I don’t expect SystemVerilog to be the #1 language choice for every design and verification project out there, but all languages must evolve to stay relevant, and developers want to protect their investment in verification IP for as long as possible.

I think it’s time to start the revision process while keeping a delicate balance between stability and staying relevant. The good news is that many feature proposals are already in place, and tool vendors have already implemented a lot of them. A few features were never completed in the LRM and were always expected to be fleshed out in the “next revision” (real-number modeling, covergroup extensions, DPI). There are also many unnecessary restrictions in the language that exist simply because the committee was worried about unintended consequences, but tools have since moved forward.

Stay tuned for an upcoming announcement about the start of the Working Group for the next revision of the 1800 standard. Late last year, the IEEE Standards Association approved a Project Authorization Request (PAR) to authorize a revision to the standard.  Where do you think the Working Group should focus its efforts for the next revision? I’d love to hear your ideas in the comments below or in the Verification Academy’s SystemVerilog forums.

-dave_rich

 

Comments

7 thoughts about “Time for Another Revision of the SystemVerilog IEEE 1800 Standard”
  • It might be nice to be able to say something like

    my_covergroup.my_coverpoint.my_bin.get_coverage()

    to find out which bins were full and which were not (a small sketch of the idea follows).
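
    A minimal sketch of the idea, using hypothetical names (len_cg, len_cp, small). The group-level and coverpoint-level calls are standard today; the commented-out per-bin call is the requested extension:

      module bin_query_sketch;
        bit clk;
        int pkt_len;

        covergroup len_cg @(posedge clk);
          len_cp: coverpoint pkt_len {
            bins small = {[1:64]};
            bins large = {[65:1500]};
          }
        endgroup

        len_cg cg = new();

        initial begin
          // legal in 1800-2017: query the whole group or one coverpoint
          $display("group=%f%%  coverpoint=%f%%",
                   cg.get_coverage(), cg.len_cp.get_coverage());
          // the request: query a single bin (not legal today)
          // $display("bin=%f%%", cg.len_cp.small.get_coverage());
        end
      endmodule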

  • What can be done to make functional coverage more extensible?

    Covergroup cross syntax allows you to concisely define coverage that demonstrates you’ve reached complex states or modes in a design, something like system power state crossed with the number of active PCIe lanes. It would be useful if transaction coverage could be crossed with that without having to be located in the same covergroup. There are tool-specific features that allow this, but they are non-standard.

    One possible solution would be to allow definition of coverpoints and crosses outside of covergroups. They would compute an instantaneous value, like continuous assignments, any time an input changed, but sampling of coverage would still be controlled by covergroups, which would define the sampling conditions and could reference coverpoints and crosses defined outside the covergroup (today’s in-covergroup form is sketched below).
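
    A minimal sketch of what can be written today, with hypothetical names (sys_cg, pwr_state, active_lanes). The proposal above would let the coverpoints or the cross live outside the covergroup and be referenced by more than one covergroup:

      module cross_sketch;
        typedef enum {PWR_OFF, PWR_STANDBY, PWR_ON} pwr_state_e;

        bit         sample_now;     // event that triggers sampling
        pwr_state_e pwr_state;
        int         active_lanes;   // number of active PCIe lanes

        covergroup sys_cg @(posedge sample_now);
          pwr_cp   : coverpoint pwr_state;
          lanes_cp : coverpoint active_lanes { bins lanes[] = {1, 2, 4, 8, 16}; }
          pwr_x_ln : cross pwr_cp, lanes_cp;
        endgroup

        sys_cg cg = new();
      endmodule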

  • This is great news! I have a laundry list of things that I’ve thought of over the years:

    1. UDNs could take a class as a nettype definition. This is kind of a half-baked idea; I sent you an email about it a couple of months back. The main idea is that I keep finding I need to maintain multiple nettypes. For example, at the IP level I may need a nettype struct with more fields to handle some complex verification scenarios, while at the SoC level I’m usually only checking for connectivity, and that complex nettype is overkill. (A basic UDN with a resolution function is sketched after this list.)

    2. UDNs need an X and Z state definition baked in somehow. Back in the days of ‘wreal’ there were `wrealXState and `wrealZState macros that the tools understood to mean contention or high-Z. This was lost in the UDN definition. It would be great to have a standard `define for this, or the ability to assign Z or X to a UDN directly.

    3. Need to define how to handle constrained-random reals and real coverage. All of the vendors support this already, though each has its quirks. It would be great to define this functionality in the LRM.

    4. Need to define how to handle UDNs through switch primitives. 2 of the vendors support UDNs through tranif1 and tranif0 gates. This is critical for modelling analog pass switches.

    5. Multiline string support! Being able to put a block of text inside triple quotes (like Python, for example) would be a great feature.

    6. Some kind of standardization for docstrings using SV attributes. We chatted about this as well a month or so ago. The attribute system exists and should be able to do the heavy lifting here. It may fall outside the scope of the LRM, but it would be good to define a standard attribute for documentation. Then when you hover over a variable/object/module instance/etc. in a downstream GUI debug tool, it could query that docstring and display it as information in a pop-up. Or a third-party tool could automatically generate documentation based on the docstring attributes in a design/testbench. This would be a great step in modernizing the SV/UVM debug experience and would give developers a standardized approach to documenting code. (A sketch using today’s attribute syntax appears after this list.)

    7. Need a way to get the current timescale and precision from within the code itself. I’ve run into scenarios before where I need the ability to query the current timescale and timeprecision settings for a particular block. For example, I may have #delays that need to be a specific real time (i.e. not relative to the set timescale). With a function that can query the timescale, I can adjust the value of the #delay dynamically.

    8. This is probably outside the scope of the SV LRM, but now that UDNs exist, it would be nice to change the upf_supply_nettype from a hardcoded struct of two 32-bit ints to a UDN composed of a real voltage and a state enumeration. If nettypes could also be classes, we could extend this base class to modify the resolution function or add additional fields. For example, it would be nice to add a current field so that we could check for over/under-current conditions during UPF simulations.

    9. Resolving multiple UDNs. Today all UDNs connected together must be of the same type. This makes it difficult to create portable IP that utilizes UDNs. Can there be a system to resolve ‘equivalent’ values on a net? For example, say I have a voltage reference that generates a scalar real UDN value of 0.7. This ‘0.7’ real value connects to the input pin of some 3rd-party analog model that just has a simple wire. It would be good to be able to say that 0.7 === 1 for this net so that everything resolves properly. Kind of a hack, but connecting a bunch of types together usually is :).

    10. Wire as interconnect. All of the big 3 simulators support using a ‘wire’ as an ‘interconnect’ as long as the interconnect rules are followed. Can this be codified in the LRM as well?
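
    For reference on items 1, 2, 8 and 9, a minimal user-defined nettype sketch with illustrative names (analog_t, res_sum, analog_net). The nettype and resolution-function syntax is already standard; the current field simply echoes the UPF suggestion in item 8:

      package analog_pkg;
        typedef struct {
          real voltage;
          real current;   // extra field suggested for UPF-style checks
        } analog_t;

        // resolution function: called whenever any driver of the net changes
        function automatic analog_t res_sum(input analog_t drivers[]);
          res_sum = '{0.0, 0.0};
          foreach (drivers[i]) begin
            res_sum.voltage += drivers[i].voltage;  // naive summing resolution
            res_sum.current += drivers[i].current;
          end
        endfunction

        nettype analog_t analog_net with res_sum;
      endpackage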
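
    And a minimal sketch of item 6 using today's standard attribute syntax. The attribute name doc is hypothetical; tools ignore attributes they don't recognize, so a debug or documentation tool could be taught to pick this one up:

      module attr_sketch;
        (* doc = "Running count of packets accepted by the ingress FIFO" *)
        int unsigned pkt_count;

        (* doc = "Asserted for one cycle when a CRC error is detected" *)
        logic crc_err;
      endmodule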

  • 1. Python/JavaScript-like generators with yield semantics.

    A generator defined inside a class could be an alternative and better choice than uvm_sequence and its body task. Often our testbench needs a stimulus generator but doesn’t drive the stimulus directly onto an interface. A full sequence/sequencer/driver is too heavyweight for this purpose, and the simpler solution of an SV task run in a forked SV process still needs an extra communication facility between it and the consumer (the testbench). This is always a burden for the testbench designer. (The mailbox-based workaround is sketched after this list.)

    A Python-style generator could be a good solution. Each time the testbench calls the generator, it yields a transaction and keeps its state. An SV debugger could then provide the ability to step from the caller into the callee and jump back at the yield statement; it is always hard to debug producer/consumer code in parallel SV processes.

    2. Packed unions should support shortreal and int together, like C.
    3. Members inside a packed union shouldn’t have to be the same size.
    4. Variable declarations in the middle of a task/function body, after zero or more statements, like C++.

    5. Enhanced DPI for variable-sized array passing. There are four kinds of data-passing direction:
    a. SV function calls a C function and a variable-sized array passes from SV to C. This has been doable through the open-array API (sketched after this list).
    b. SV function calls a C function and a variable-sized array passes from C to SV.
    c. C function calls an SV function and a variable-sized array passes from C to SV.
    d. C function calls an SV function and a variable-sized array passes from SV to C.
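
    For contrast with item 1, a minimal sketch of the forked-process-plus-mailbox pattern described as a burden above (hypothetical names: pkt_t, gen_pkts). A yield-style generator would fold the mailbox plumbing into the language:

      class pkt_t;
        rand bit [7:0] len;
      endclass

      module gen_sketch;
        mailbox #(pkt_t) mbx = new(1);   // the extra communication facility

        task automatic gen_pkts();
          repeat (10) begin
            pkt_t p = new();
            void'(p.randomize());
            mbx.put(p);                  // blocks until the consumer takes it
          end
        endtask

        initial fork
          gen_pkts();                    // producer runs in its own process
        join_none

        initial begin                    // consumer (the testbench)
          pkt_t p;
          repeat (10) begin
            mbx.get(p);                  // pull one transaction at a time
            // ... use p ...
          end
        end
      endmodule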
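
    And a minimal sketch of case (a) with the existing open-array API. The import name consume_samples is hypothetical; the C side would receive an svOpenArrayHandle and walk it with svSize()/svGetArrElemPtr1() from svdpi.h:

      module dpi_sketch;
        // the unsized [] makes 'samples' a DPI open array
        import "DPI-C" function void consume_samples(input int samples[]);

        int data[];
        initial begin
          data = new[8];
          foreach (data[i]) data[i] = i * i;
          consume_samples(data);   // the array size travels with the handle
        end
      endmodule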

  • Hi Dave:
    I am collecting coverage information in a vendor tool’s GUI, but it cannot show which test case contributed to the coverage, which troubles me a lot. In some cases the coverage is not what I expect and I want to know the reason behind it, so I need to know the exact test case.

  • Some observations:
    1) Verification effort is 2-5X the effort of coding the DUT. That means, if the testbench is the same quality as the DUT, then 2-5X as many bugs will be found in the testbench as in the DUT.
    2) Property checking is about 5 times more productive than writing test cases.
    3) Second place is unit testing (which includes mocks).
    That means we want the language to support as much property checking as possible for verification. After that, we want improvements to enable unit testing. After that, simulation.

    A) Ada SPARK supports property checking in its classes. But this requires that the LRM itself be rigorously defined so that the meaning can be efficiently and effectively verified using property checking (i.e., so that you don’t require infinite memory or infinite time; today the LRM allows for a lot of vagueness).

    I’d propose taking the same approach as Ada: take a small subset of the language that is already clear, and then add property checking to it (effectively, this will look like adding assertions into the interface declaration; a small sketch appears at the end of this comment). Initially, these will just act as runtime checks, but over time, property checking can be added using new tools or extensions to existing tools.

    Alternatively – since GCC already supports Ada, add DPI extensions that make Ada/SV interaction cleaner and automatable. Then people can just code in Ada/SPARK.

    Now I realize that this goes against the whole trend of rapid testbench creation using loosely typed languages. But my personal experience has been that one has to write huge amounts of unit tests in loosely typed languages, and that this takes far more time than if the compiler caught the problems. There is industry data agreeing with this.

    Also, strongly typed languages have lower maintenance costs, because when a new person picks up a loosely typed language, they have to understand all the subtleties of the code plus the language, or else they’ll insert a new bug when fixing an old one.

    I’m looking at the overall cost of ownership, not how quickly I can get a prototype going.

    Lastly – since *everything* is on the internet, everything can be attacked. So we need things like property-checking to prove that hardware won’t let bad things through.

    The European equivalent of the FAA is looking to create a “DAL A++” because DAL-A isn’t strict enough quality for aircraft.
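
    A minimal sketch of an assertion carried in an interface declaration, as mentioned above (hypothetical names: hs_if, req, ack). A simulator treats it as a runtime check, while a formal tool can try to prove it as a property:

      interface hs_if (input bit clk);
        logic req, ack;

        // contract shipped with the interface:
        // every request must be acknowledged within 1 to 4 cycles
        property p_req_ack;
          @(posedge clk) req |-> ##[1:4] ack;
        endproperty
        a_req_ack: assert property (p_req_ack);
      endinterface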


This article first appeared on the Siemens Digital Industries Software blog at https://blogs.sw.siemens.com/verificationhorizons/2020/07/30/another-revision-systemverilog/