Getting More Value from Your Stimulus Constraints
Verification engineers put a great deal of effort into writing and tuning constraints for random stimulus. It’s critical that the constraints correctly express the valid relationships between the stimulus variables; otherwise, invalid stimulus will be generated or, worse, important valid combinations will never be produced.
When it comes to bug hunting, running open-loop random stimulus is recognized as a good way to exercise cases the verification engineer wouldn’t intuitively think of. However, the very constraints that verification engineers work so hard to perfect can get in the way of this goal by introducing random-resistant cases: value combinations that have an extremely low probability of occurring.
Consider the SystemVerilog class shown in Figure 1 below to see just how dramatic an effect a few constraints can have on the cases a constraint solver produces. One simple constraint skews the entire random distribution!
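Since the figure itself is not reproduced here, the following is a minimal sketch of the kind of class it describes; the class name, variable names, and the specific constraint are illustrative assumptions rather than the original figure. With a solver that picks uniformly across the legal solution space, the single implication below leaves only one solution where a is 1 (versus sixteen where a is 0), so a == 1 appears in roughly 1 of every 17 randomizations instead of half of them; exact numbers vary by simulator.

// Illustrative sketch only -- not the original Figure 1.
class skew_item;
  rand bit       a;
  rand bit [3:0] b;

  // One simple constraint: when a is 1, b must be 0.
  // Solution count: a==1 allows 1 legal (a,b) pair, a==0 allows 16,
  // so a solution-space-uniform solver picks a==1 only ~6% of the time.
  constraint a_implies_b_zero { a -> (b == 0); }
endclass

module tb;
  initial begin
    skew_item item = new();
    int       a_one_count;
    repeat (1000) begin
      if (!item.randomize()) $error("randomize() failed");
      if (item.a) a_one_count++;
    end
    // Expect a heavily skewed count rather than ~500.
    $display("a == 1 in %0d of 1000 randomizations", a_one_count);
  end
endmodule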
This type of skewed distribution is easy to see and adjust for when the variable combinations are monitored by functional coverage. However, let’s face it, the whole premise of using random stimulus to find bugs is that random generation will produce cases that we didn’t think of (and, thus, didn’t create functional coverage for).
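As a rough sketch of the kind of coverage that would expose such a skew (the module, covergroup, and coverpoint names below are invented for illustration), crossing the two variables makes the rarely hit combinations stand out in the coverage report:

module cov_sketch;
  // Illustrative only: names are assumptions, not from the original post.
  covergroup skew_cov with function sample(bit a_s, bit [3:0] b_s);
    cp_a     : coverpoint a_s;
    cp_b     : coverpoint b_s;
    ab_cross : cross cp_a, cp_b;  // 2 x 16 = 32 automatic cross bins
  endgroup

  skew_cov cov;
  initial cov = new();
  // In a test, call cov.sample(item.a, item.b) after each successful
  // randomize(); the under-hit a==1 cross bins become visible immediately.
endmodule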
What if the very constraints that engineers spend so much time creating and refining could actually help ensure that corner cases are hit? If you’re attending DAC this year, come see a poster paper titled “Strategy-Driven Stimulus Generation: Constraint-Guided Test Selection,” which proposes an approach that leverages the constraint description to identify high-value stimulus and get more out of bug-hunting simulation runs:
Session Title: Designer/IP Track Poster Session – Wednesday
Session Number: 302
Presentation Title: Strategy-Driven Generation: Constraint-Guided Test Selection
Date: Wednesday, 6/4/2014 12:00-1:30PM
Room: 100
How do you ensure that your random simulations continue to provide incremental value, and aren’t just testing the same thing over and over again?