Here’s the scene: you’re a hardware engineer at a conference sitting in on a talk about functional coverage. You’re there because you think functional coverage is important. You think you do a good job of building functional coverage groups but the title of the talk suggests otherwise. The speaker takes the stage…
Hi everyone. It’s good to see you all here.
Before I get started I just want to take a quick poll. By show of hands, how many of you are using constrained random verification? Right.. quite a few people… that’s what I figured.
Ok… keep your hand up if you use functional coverage to measure whether or not you’ve finished your verification effort.
Alright… I see a few dropping but most of you have still got your hands up… that’s a good sign since we all know results from constrained random tests aren’t overly useful without functional coverage.
Ok… next… does anyone here test their coverage groups? I mean does anyone take the time to verify that their coverage groups, the actual code, are correct? Anyone? I see a lot of hands dropping… not a good sign.
Last… from the people that just dropped their hand I’ve got a question: should you trust your functional coverage model, probably the most important code you write, if you haven’t gone to the trouble of making sure it’s correct?
A fully populated functional coverage model has become a pretty important component of determining whether you’ve sufficiently traveled the design state space. It tells you that you’ve done all that needs to be done by observing all that needs to be observed. The coverage model is the benchmark by which DONE is measured, which is the way it should be… unless your coverage model is wrong (i.e. it’s a defect ridden pile of unverified code).
Luckily, writing unit tests with SVUnit is a great way to verify your coverage model is correct. Here’s how.
To start, testing your coverage model depends on an “obscure” built-in method for cover points and cover groups (I say “obscure” not because it isn’t important, but because I doubt many people actually use it). The get_inst_coverage() method is what we’re after; what it does is return a coverage percentage from a particular cover group or coverpoint. For example, if you have a coverpoint with 2 bins and you sample 1, the get_inst_coverage() returns 50. Sample both and you get 100. Sample neither and you get 0.
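As a minimal sketch of that behavior (the module, signal, and covergroup names here are hypothetical; `option.per_instance = 1` is included because some simulators return 0 from get_inst_coverage() without it, as noted at the end of this post):

```systemverilog
module cov_demo;
  bit [7:0] addr;

  covergroup addr_cg;
    // per-instance coverage so get_inst_coverage() returns a real result
    option.per_instance = 1;
    addr_cp : coverpoint addr {
      bins lo = {0};
      bins hi = {4};
    }
  endgroup

  addr_cg cg = new();

  initial begin
    $display("%0.1f", cg.addr_cp.get_inst_coverage()); // 0: nothing sampled
    addr = 0; cg.sample();
    $display("%0.1f", cg.addr_cp.get_inst_coverage()); // 50: one of two bins hit
    addr = 4; cg.sample();
    $display("%0.1f", cg.addr_cp.get_inst_coverage()); // 100: both bins hit
  end
endmodule
```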
Here are three unit tests that illustrate what I’m talking about. We have a coverpoint called addr_min_cp that observes two addresses: ‘0’ and ‘4’. The first test confirms that when we sample addr_min_cp with ‘0’ we’ve hit 50%. The second test confirms that ‘4’ hits the other 50%. The third test, where we combine ‘0’ and ‘4’, verifies addr_min_cp reaches 100%.
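Roughly what those three tests would look like with the SVUnit macros (the covergroup handle `cg` and the `sample_addr()` helper, which drives the address and calls `cg.sample()`, are assumptions about the surrounding unit test module):

```systemverilog
`SVTEST(addr_min_cp_hits_0)
  sample_addr(0);
  `FAIL_UNLESS(cg.addr_min_cp.get_inst_coverage() == 50)
`SVTEST_END

`SVTEST(addr_min_cp_hits_4)
  sample_addr(4);
  `FAIL_UNLESS(cg.addr_min_cp.get_inst_coverage() == 50)
`SVTEST_END

`SVTEST(addr_min_cp_hits_0_and_4)
  sample_addr(0);
  sample_addr(4);
  `FAIL_UNLESS(cg.addr_min_cp.get_inst_coverage() == 100)
`SVTEST_END
```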
With addr_min_cp observing interaction with the low end of the address space, we can also observe interaction with the upper end. Here are three more tests we can use to verify that our addr_max_cp is behaving as we expect.
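A sketch of those tests, assuming addr_max_cp bins the two highest word-aligned addresses, `ADDR_MAX - 4 and `ADDR_MAX (the bin values and the `sample_addr()` helper are assumptions):

```systemverilog
`SVTEST(addr_max_cp_hits_max_minus_4)
  sample_addr(`ADDR_MAX - 4);
  `FAIL_UNLESS(cg.addr_max_cp.get_inst_coverage() == 50)
`SVTEST_END

`SVTEST(addr_max_cp_hits_max)
  sample_addr(`ADDR_MAX);
  `FAIL_UNLESS(cg.addr_max_cp.get_inst_coverage() == 50)
`SVTEST_END

`SVTEST(addr_max_cp_hits_both)
  sample_addr(`ADDR_MAX - 4);
  sample_addr(`ADDR_MAX);
  `FAIL_UNLESS(cg.addr_max_cp.get_inst_coverage() == 100)
`SVTEST_END
```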
Now how about a few equally spaced bins between min and max, as verification engineers often do to confirm that an acceptable number of data points have been plucked from throughout the address space. Here’s how we’d loop through 16 data points, verifying the coverage score along the way.
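A sketch of that loop, assuming a 16-bin coverpoint called addr_bins_cp (a hypothetical name) that evenly divides the address space:

```systemverilog
`SVTEST(addr_bins_cp_hits_all_16)
  for (int i = 0; i < 16; i++) begin
    // hit bin i, then check the running coverage score
    sample_addr(i * ((`ADDR_MAX + 1) / 16));
    `FAIL_UNLESS(cg.addr_bins_cp.get_inst_coverage() == real'(i + 1) * 100.0 / 16)
  end
`SVTEST_END
```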
Important to point out from that snippet of code is that get_inst_coverage() returns a real equal to ‘N * 100/16’ after N bins are hit, which means we need to compare against a real or we find ourselves with a failing unit test.
Now the coverage points in those examples are pretty basic right? Who hasn’t implemented the pattern of MIN, MAX and a few in between? Anybody? And if we look at the code, there aren’t too many ways to screw that up…
…and yet there are ways to screw up just about anything, aren’t there, even basic coverpoints. Let’s say you define `ADDR_MAX in some other file as follows:
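Something like this simple macro definition (the actual value here is an assumption):

```systemverilog
`define ADDR_MAX 'hfc
```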
Nothing wrong with that, until somebody changes the definition for a test that’s being written and mistakenly changes that line to this without understanding the implications:
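For example, shrinking the address range for a directed test (again, the values are hypothetical):

```systemverilog
`define ADDR_MAX 'h7c  // halved from 'hfc; the old max-address bins can never be hit
```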
Now, as I suggest in the title, your coverage model is wrong.
What happens from here? Well, if you have unit tests, changing that line wouldn’t be a big deal because two failing unit tests would flag a problem and the person that made the change could go back and fix it. Without unit tests, however, the chances that this defect gets shipped to a customer – via a product with an inadequately exercised address space – are quite high. And what happens when your customer comes to you and says they can’t reach the upper end of the address space?
Oh… sorry. It looks like we had a bug in our coverage model so we didn’t see that. But don’t worry, it was an easy one. It’s fixed now.
Sure it’s easy to fix, though how many times can you get away with “oh… sorry”? That depends on your customer and your track record. A couple times might not be bad; several times will be embarrassing; one too many can be catastrophic.
For as simple as it is to test your coverage groups with SVUnit, and for as critical as they are to determining progress and design coverage, I’d recommend erring on the side of caution.
NOTE: As of writing this, the built-in get_inst_coverage() method looks like it’s only properly supported in Questa. I’m using version 10.1c_1. As of VCS version G-2012.09 and Incisive version 12.10-s008, get_inst_coverage() was not supported by Synopsys or Cadence. If you’re using newer versions, you’ll want to see for yourself if anything has changed. If you see a version of either that works, please let me know and I’ll post an update.
UPDATE: You’ll see in the comments that Cadence also supports the get_inst_coverage() method but you need to use the ‘-coverage <string>’ option on the command line (which I didn’t originally have). That’s confirmed for versions 12.10-s007 and 12.10-s008.
UPDATE: Count Synopsys in as well now, as of at least VCS version G-2012.09. I had an AE help me out (he easily saw what I was forgetting). With the “option.per_instance = 1” setting in your covergroup, get_inst_coverage() returns a valid result. Without it, you’ll see that it returns 0 in all cases. All of Synopsys, Mentor and Cadence support unit testing covergroups!
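Putting those updates together, a covergroup set up so get_inst_coverage() behaves across all three simulators might look like this (covergroup and bin names are hypothetical; Incisive additionally needs the ‘-coverage <string>’ command-line option mentioned above):

```systemverilog
covergroup addr_cg;
  // required for VCS: without per_instance = 1, get_inst_coverage()
  // returns 0 in all cases
  option.per_instance = 1;
  addr_min_cp : coverpoint addr {
    bins min_0 = {0};
    bins min_4 = {4};
  }
endgroup
```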