Quality Lag and Debug Lag with Constrained Random Verification

If you’ve read Does Constrained Random Verification Really Work and Functional Verification Doesn’t Have to be a Sideshow, you’ll know that I’ve become a bit of a skeptic when it comes to constrained random. My opinion hasn’t changed much since those posts and I think I’ve got a couple of visuals that will help people see the point I was arguing in Functional Verification Doesn’t Have to be a Sideshow: that a successful constrained random verification effort starts with directed testing… a lot of directed testing.

To recap… here’s the graphic we’ve all seen more than once.

It was probably in the early 2000s when this was shown quite often in sales/marketing pitches and in technical papers. Every time someone referenced this graphic, it was used to note how much better constrained random verification was relative to directed testing. For just a little more effort up front, you get far faster and more rigorous coverage of your design state space later. I used to think this graphic was pretty accurate. Around 2001, when I started with Vera and started building constrained random testbenches, there was lots of new stuff to learn (which was great) and I was finding more bugs – more odd, corner case bugs – than I ever had before. Yes, it was costing more effort to build the testbench but the extra effort seemed worth it. Just as the graphic suggests.

Time has passed and now I think the blue (ideal) constrained random curve that we’ve all been shown numerous times is unattainable for most teams. It’s a great idea. It sells the technique, but I don’t think it’s at all realistic and I don’t think many teams reap the advertised benefits. In my opinion, what hardware teams see in reality is probably more like the dashed red line in this next graphic, with a ramp-up and productivity that fall short of the ideal.

What makes the dashed red line more probable than the blue line? Poor code quality (the Quality Lag) and long debug cycles (the Debug Lag).

Quality Lag

So you’re “done” your testbench (if you want to know what I mean by “done”, you can read about it here) and you’ve just written your first constrained random test. Your test constraints are fairly lax because you want to cover as much of the state space as possible as quickly as possible. You run the test with your fingers crossed.

The first issue you see is a time-out. Nuts! It takes a while, but you find the cause of the time-out, fix it up and run the test again. Another time-out. Bah!! Find it, fix it, run the test again. Null object access in your BFM. Forgot to initialize a transaction. Fix that, run it again. Still a null access, only this time it’s in your coverage object hanging off the BFM. Fix that, run it again. etc, etc, etc.
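To make that first class of issue concrete, here’s a minimal sketch of the “forgot to initialize a transaction” bug and its fix. The class and field names are made up for illustration; they aren’t from any particular testbench.

// Hypothetical transaction and BFM illustrating a null handle access;
// names are illustrative only.
class bus_item;
  rand bit [31:0] addr;
  rand bit [31:0] data;
endclass

class bus_bfm;
  bus_item item;  // a handle is declared, but no object exists behind it yet

  task send_random();
    // BUG: calling item.randomize() here would be a null object access
    //      because new() was never called on the handle.
    // FIX: construct the transaction before randomizing it.
    item = new();
    if (!item.randomize()) $error("randomization failed");
    // ... drive item.addr/item.data onto the bus here ...
  endtask
endclass

Trivial once you see it, but when it surfaces as a null object access somewhere inside a constrained random run, it still costs a debug cycle.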

For constrained random tests, in order to make steep progress immediately (the point at which you abruptly shoot northward from the initial development flatline), you need high quality code (aka: a bug-free testbench). The problem is, we don’t build bug-free testbenches. The development techniques we use, namely big upfront planning and coding everything in one shot with little or no testing, produce bug-riddled testbenches. Just to clarify, bug-riddled testbenches are the opposite of what we need when we run that first random test. What usually ends up happening is what I’ve described above: several days of debugging silly 1st order issues like misconnects and null objects, several days or weeks of debugging 2nd order issues (that ‘+1’ should be a ‘+2’ except when x==10) and several days or weeks re-writing code due to 3rd order issues (oops… I didn’t know <it> worked like <that>). These are the 1st, 2nd and 3rd order issues I talk about in Functional Verification Doesn’t Have to be a Sideshow.

From the point you’re “done” coding your testbench, there’s always extra effort required to improve your code to the point where it’s solid enough to make decent progress (as the blue curve suggests). Initially, testbench quality is so poor that you’re just barely making progress. Over time, as you fix more bugs, quality improves and you’re able to make decent progress. Quality lag is the time in between.

Debug Lag

Here’s another scenario… you have a constrained random test failing and you don’t really know why. You’re confident it’s a design issue though so you file a bug report with the designer. The designer comes to you a day later…

designer: “can you tell me more about the bug you’re seeing?”

you: “not really. I know this instruction isn’t being handled properly but I’m not sure why.”

designer: “OK. Can you tell me what your test is doing?”

you: “well… it’s a random test. I know what instructions I’m sending in and I’m pretty sure the problem happens on the 27th instruction… but that’s about it.”

designer: “Can we build a test that sends in that instruction on its own?”

<time passes>

you: “The test I wrote with that instruction on its own passes. So I guess we can assume the problem in the random test has nothing to do with the actual instruction.”

designer: “Is there any other way to isolate what’s going on?”

you: “Not sure. I still don’t really know what’s going on.”

The other characteristic the constrained random curve depends on is tight debug cycles so you can keep the coverage momentum going. One problem though: constrained random is… well… random. It’s not always obvious what’s going on and you don’t always know where to look when something goes wrong. Basically, the problem space with constrained random testing can be immense and an immense problem space doesn’t facilitate tight debug cycles. The time required to navigate the problem space is debug lag.
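One way to shrink that problem space is to pin the stimulus down around the suspect and re-run with the same seed, which is roughly what “sending that instruction on its own” amounts to. Here’s a minimal sketch, assuming a hypothetical instruction item; the names and opcode value are illustrative only.

// Hypothetical instruction item used by the random test.
class instr_item;
  rand bit [7:0]  opcode;
  rand bit [31:0] operand;

  constraint legal_c { opcode inside {[8'h00:8'h3f]}; }
endclass

// Debug-only subclass: pin the opcode to the one that showed up 27th in the
// failing run so the scenario becomes (nearly) a directed test.
class instr_item_debug extends instr_item;
  constraint pin_opcode_c { opcode == 8'h2a; }  // value taken from the failing log
endclass

Swap the debug item in for the original (a UVM factory type override if you’re in UVM land, or just construct it directly in the test), keep the failing seed, and the designer has something small enough to reason about.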

But Constrained Random Is Better Than Directed Testing, Right?

Yes and no. I think they both have their strengths, which is why I present the hybrid approach of directed testing for 1st and 2nd order bugs with constrained random for the 3rd order bugs in Functional Verification Doesn’t Have to be a Sideshow. When constrained random verification is the goal, we don’t often see directed testing as the first step in achieving that goal… but it should be.

This is what the hybrid approach looks like relative to the constrained random and directed curves.


If you want to read more about the mechanics and motivations behind it, I’d really suggest going back to Functional Verification Doesn’t Have to be a Sideshow to see the strategy I present near the end. It has ideas for when and why you use directed tests and how we overcome the quality and debug lags. If your team sinks a lot of time into debug, I think this approach can really help.

I think there’s huge value in this hybrid approach. It’s realistic, it acknowledges the strengths and weaknesses of directed and constrained random testing and it addresses 2 major issues that seem to plague all verification teams: quality lag and debug lag.

You shouldn’t have to wait for quality and you shouldn’t be wasting time with debug.

-neil

PS: quality lag and debug lag are issues I address in my Expediting Development With An Incremental Approach To Functional Verification lunch-n-learn. I do that talk remotely for teams that are interested. If you and your team are interested, let me know at neil.johnson@agilesoc.com!

8 thoughts on “Quality Lag and Debug Lag with Constrained Random Verification”

  1. Neil,

    Excellent post and the graphs really depict the experience of engineers. In my opinion, the right approach towards CRV (constrained random verification) is to have strict constraints initially so as to generate tests that are almost directed. Slowly the randomness increases and finally we go back to further constraining with weights so as to direct the tests towards coverage corners. The debugging of the TB, however, may still lead to unexpected delays. I had written about the + & – of CRV some time back. Might be of interest –
    http://whatisverification.blogspot.in/2011/11/constrained-random-verification-and.html

  2. Neil,
    Your posts are always an interesting and delightful read. I totally agree with what you said and I’ve sub-consciously been using the same approach. Even when I do constrained random tests, I still tend to put in a lot of debug prints showing which random values/config settings a particular test and seed are running with, which usually saves me some debug time or lets me reproduce it in a directed test again to show to the designer. However, these days all ASICs/FPGAs are getting larger, and it’s not unusual to have a simulation run last a few hours while I keep dozing off at my desk. Then when I try to multi-task, my mind gets all mingled up after a while about which tests were doing what in the separate workspaces… Maybe I just need a bigger brain to hold all this mumble-jumble!

  3. Well done, and a correct analysis. CR has always seemed to me to be like testing the strength of a house by creating an earthquake, and then sifting through the rubble to see what has gone wrong.

  4. My personal experience matches this article, and because the RTL is never bug-free, one has to start off with repeatable, highly constrained (i.e. “directed test”) tests.

    But before verifying the RTL with the testbench, verify the testbench. I highly recommend svunit for that, plus of course any other standalone testing of the testbench. And of course the whole point of UT (unit testing) is that you flush out things in the requirements that are incomplete/inconsistent before you actually mate up the testbench with the RTL.

    Many tickets I filed during UT of the testbench actually were against the RTL requirements or specification.

    If Formal Verification tools are available, then you should write your verification in SVA assertions as much as possible, with synthesizable structures transmitting/receiving DUT data. That way, the same code can be reused in both simulation and Formal verification.

  5. Nice post, and it matches the lessons we learned.
    Another thing I would like to raise is whether it takes less development time to build the testbench if you reuse in UVM, versus developing CRV in a pure SystemVerilog testbench only. Reuse across projects depends on the type of IP, or the category of functionality of the IP design. It’s difficult to reuse the UVM testbench of a modem baseband and port it over to USB. But it always takes extra time if we adopt a reuse mindset during development. I have doubts about whether we can clearly define the spec and scope of a reusable UVM testbench, or whether it ends up as waste: more code, more prone to bugs. How do you usually decide whether the code you develop is for this project only or for reuse across projects? Or what is the life cycle of the testbench code you develop?

    1. Neil,

       Thank you for your answers. You wrote: If “complexity needs to be verified as it accumulates” means “code needs to be verified as it’s written” then I think we’re on the same page. Complexity of code does not directly correlate with the number of code lines. I doubt that every piece of code that is written has to be verified. Also, designers re-use previously verified code patterns, and there is probably little sense in doing that verification again and again.

       Also, some pieces of design code require not just agile verification, but thorough verification using a combination of directed, random and formal methods. Think about an arbiter controlling access to a common resource. Correct functionality of this piece of logic, with just dozens of flops, is critical for the system. Applying just software methods to verifying such logic is clearly not enough.

       Thank you for the link to Cucumber. It is interesting, but requires an introduction of YAL (Yet Another Language).

       Speaking about what is needed to make tests regressionable: just compress all test results into a pass/fail status bit and let the regression script understand it. For directed tests it is just a comparison between expected and actual results. For random ones there is a need to develop testbench infrastructure (monitors, scoreboards, checkers), connect all of it into one error reporting system and so on. Then we introduce a certain testbench development methodology, script support and so on. Do you consider such cases as part of Agile? Or is Agile all about quick directed testing with expected results?

       Regards,
       -Alex
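A couple of the comments above describe techniques that are concrete enough to sketch. The constraint narrowing described in the first comment, from almost-directed constraints out to weighted randomness aimed at coverage corners, maps naturally onto SystemVerilog distribution weights. This is a minimal sketch with made-up fields and weights, not code from any particular testbench:

// Hypothetical stimulus item showing the narrowing progression;
// fields and weights are illustrative only.
class pkt_item;
  rand bit [3:0]  len_sel;
  rand bit [31:0] addr;

  // Early in the effort: almost directed (everything pinned).
  //   constraint early_c { len_sel == 4'd1; addr == 32'h0; }

  // Later: open up the space, then steer toward coverage corners with weights.
  constraint corners_c {
    len_sel dist { 4'd0 := 5, [4'd1:4'd14] :/ 20, 4'd15 := 5 };
    addr    dist { 32'h0 := 1, [32'h1:32'hFFFF_FFFE] :/ 2, 32'hFFFF_FFFF := 1 };
  }
endclass

And the simulation/formal reuse mentioned in comment 4 usually takes the shape of properties like the one below; the handshake and signal names are illustrative only.

// Hypothetical request/grant handshake check, usable in simulation and
// as a formal property; signal names are made up.
module handshake_checks (
  input logic clk,
  input logic rst_n,
  input logic req,
  input logic gnt
);
  // Every request must be granted within 1 to 4 cycles.
  property p_req_gets_gnt;
    @(posedge clk) disable iff (!rst_n) req |-> ##[1:4] gnt;
  endproperty

  a_req_gets_gnt: assert property (p_req_gets_gnt)
    else $error("req not granted within 4 cycles");
endmodule

Bind a module like that onto the DUT and the same assertion runs in simulation or under a formal tool.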
