
Verification is from Vulcan, Validation is from Pandora

By Doug Amos

At DVCon earlier this year, I was lucky enough to present to the munching masses at the Wednesday lunch. Now, some folks treat the DVCon lunch as a captive audience, and serve up some stodgy fare indeed.
That’s a pity.
As an industry, we are a highly intelligent, knowledgeable and discerning crowd, and it’s a mystery to me why some folks think that all that switches off when we sit down to lunch. Do we really become sponges, ready to soak up any sales pitch dumped on us, or do we just want to have lunch and some down time?

So anyway, the Mentor approach is to provide an oasis from the serious business of the proper DVCon programme, and I think folks are grateful for that. Please, tell me if I’m wrong.

However, that doesn’t mean that we don’t have some serious thoughts and questions to raise over dessert. This year, we compared and contrasted the different disciplines of Verification and Validation. We plan to repeat a couple of the arguments in blog posts over the coming weeks.

So, what’s all this about Vulcan and Pandora?

In many ways, Verification is a finite logical problem, i.e. “what is the chance that my hardware has bugs?”, and modern coverage-driven techniques can give us a logical, metric-based answer to that question, with the aim of getting as close to “no chance” as possible. Our agony in Verification is that we NEVER reach “no chance”, so we are left to decide when we are “close enough”, and the trade-off between conscience and pragmatism begins.
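
For the software-minded among you, here is a toy sketch of that metric-based answer; the coverage bins and the sign-off threshold are invented for illustration (real coverage models live in covergroups and tool databases, not in Python), but the arithmetic of “bins hit over bins defined” is the heart of it.

    # Hypothetical coverage model: each bin is a condition we want to
    # see exercised at least once during our tests.
    coverage_bins = {
        "fifo_empty": 0,          # times each condition was observed
        "fifo_full": 0,
        "fifo_wrap_around": 0,
        "back_to_back_write": 0,
    }

    def sample(event):
        """Record an observed event against its coverage bin."""
        if event in coverage_bins:
            coverage_bins[event] += 1

    def coverage():
        """Fraction of bins hit at least once: the metric-based answer."""
        hit = sum(1 for count in coverage_bins.values() if count > 0)
        return hit / len(coverage_bins)

    # The conscience-vs-pragmatism trade-off, reduced to one number:
    # we sign off below 100% because "no chance" never comes.
    SIGN_OFF_THRESHOLD = 0.95

    for observed in ["fifo_empty", "fifo_full", "fifo_empty"]:
        sample(observed)

    verdict = "sign off" if coverage() >= SIGN_OFF_THRESHOLD else "keep testing"
    print("coverage = {:.0%}, {}".format(coverage(), verdict))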

Validation, on the other hand, is not so objective and logical. Before a design starts, decisions have already been made about its desired form, function, space and pace. Not just in terms of the hard measurable parameters such as performance and power, but also “softer” factors, such as aesthetics, user interface, environmental impact or market acceptance criteria (long live focus groups!).

The ideal validation environment might involve building a series of progressively better versions of the design and exposing each to an infinite number of monkeys until one is sure that it is perfectly fit for purpose and that nothing can be broken, accidentally or deliberately. That’s the ideal, but in the same way that the “no chance” Verification ideal is never reached, neither is complete Validation. This is partly owing to cost and time, those two omnipresent barriers to perfection in any project, but also largely because some aspects of Validation success are measured against subjective criteria, not logical ones (hence the Vulcan and Pandora references, geddit?).

I made the analogy of buying a shirt in order to highlight the difference between Verification and Validation. This is summarised in the picture below, but if you’d like the explanation to go with it, may I humbly refer you to Brian Bailey’s review of Mentor’s DVCon lunch presentation.

[Figure: the shirt-buying analogy, contrasting Verification and Validation]

It’s the software, stupid

A lot of the distinction between Verification and Validation tasks comes down to the design’s software content. Ah yes, but which software? Consider a software stack, with the lowest levels, such as the boot code or the Board Support Package, being most closely dependent upon the hardware functionality, and the upper levels of the stack, i.e. the application and user space, being completely divorced from it. Typically, Verification requires just enough of the software stack to exercise the hardware under test, whereas Validation needs all of it – the full chip – the whole stack.
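
To put that split in concrete (if toy) terms, here is one way to write it down; the layer names and the hardware-dependence flags are illustrative, not a prescription.

    # An illustrative software stack, lowest layer first; the second
    # field marks whether the layer depends closely on the hardware.
    SOFTWARE_STACK = [
        ("boot code / BSP", True),
        ("OS kernel and drivers", True),
        ("middleware and libraries", False),
        ("application / user space", False),
    ]

    # Verification: just enough software to exercise the hardware under test.
    verification_payload = [name for name, hw_dep in SOFTWARE_STACK if hw_dep]

    # Validation: the full chip, the whole stack.
    validation_payload = [name for name, _ in SOFTWARE_STACK]

    print("Verification runs:", verification_payload)
    print("Validation runs:  ", validation_payload)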

Some engines, such as emulation and simulation, can indeed run software accurately, but not all of it, owing to a lack of execution speed. Even FPGA prototypes, the fastest pre-silicon engine, may lack the speed required for full-stack operation. Hybrid techniques are often used, exiling some of the system functionality into a transaction-level approximation, such as an Arm® Fast Model. In this way, an emulator can run enough of the software to do some serious hardware-software co-validation.
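
The hybrid division of labour looks something like the sketch below; the classes and their interfaces are invented for illustration, with the fast, loosely-timed side standing in for something like an Arm Fast Model and the accurate side standing in for the emulator or simulator.

    class FastCpuModel:
        """Loosely-timed side: runs the bulk of the software quickly,
        with no cycle-level detail."""
        def next_transaction(self):
            # In real life this would come from executing actual boot
            # and driver code; here we emit one canned bus write.
            return {"op": "write", "addr": 0x40000000, "data": 0xCAFE}

    class AccurateDutModel:
        """Cycle-accurate side: the emulator or RTL simulation."""
        def apply(self, txn):
            # Expand the transaction into cycle-by-cycle bus activity here.
            print("DUT sees {op} @ {addr:#x} = {data:#x}".format(**txn))

    cpu, dut = FastCpuModel(), AccurateDutModel()
    for _ in range(3):
        # The hybrid loop: only traffic that actually touches the
        # hardware under test pays the cycle-accurate price.
        dut.apply(cpu.next_transaction())

The point is the division of labour: the fast side executes most of the stack at speed, and only the hardware-facing traffic is replayed with full accuracy.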

Taken to an extreme, the whole system can be modelled at the transaction level, creating a virtual prototype, which is not just a pre-silicon approximation but a pre-RTL one as well; in fact, an excellent environment in which to validate user interfaces and aesthetics.

Some Really Useful Engines

These examples show that, while each engine represents a different trade-off between speed and accuracy, no engine should be considered solely as a Verification engine or a Validation engine; each could be either... or both!

It seems, then, that we have a wide range of pre-silicon engines at our disposal; so what’s the problem?

The problem is that we can’t simply switch to the best engine whenever we want; it takes too much effort and time to bring up the design on any particular engine, or hybrid of engines, so teams often continue to use an engine beyond its optimal field of use for their particular design. How do we make that engine change-over simpler?

Having a common “front-end” and common Verification IP (surely that should be Verification and Validation IP – Ed.) is a good goal towards which some EDA vendors are proprietorially progressing, but today there is still a wide gulf in expertise and re-use between engines; a gulf that needs to be crossed.

That’s another pity.

Just imagine running an accurate full-chip, whole-stack validation environment on an FPGA prototype, say, and then, at the proverbial push of a button, switching viewpoints and zooming in on a mixed-signal simulation of some key cells. That kind of viewpoint change would allow us to throw a searchlight on typical validation tasks; for example, investigating the fluctuations of a specific voltage rail while the software stack is running multiple applications on real-world data. Surely that is desirable. But it is the difficulty of switching viewpoints, of changing the accuracy-speed trade-off, that limits today’s Validation environments.

Keep it Safe, Keep it Secret

Then there’s Safety and Security – probably the strongest drivers of Validation today. Validation might be thought of as testing the readiness of your design for exposure to the rigours of real life. Some (in)famous validation failures of the past took many years and weird corner-case analysis to uncover (the Meltdown scenario is a case in point); however, most others became apparent rather too quickly (hmmm... my battery seems to be getting rather warm).

It’s your design vs. the rest of the world; be careful out there. Eventually, all systems have to deal with the messy, analogue world, where other unpredictable and possibly malicious systems (and people) may break even the most heavily verified design; i.e. Verification alone is not enough. The more complete and accurate one’s Validation scenarios, the less likely one’s design is to be floored in the first round.

Ding-ding!

 

Doug – May 4th, 2018


This article first appeared on the Siemens Digital Industries Software blog at https://blogs.sw.siemens.com/verificationhorizons/2018/05/03/verification-is-from-vulcan-validation-is-from-pandora/