With TDD month on AgileSoC.com just a week away, I thought an appropriate way to get people thinking in a different direction – yes… test-driven development would take us in a different direction – would be to dig up a functional verification article I co-wrote a couple of years ago. A good part of that article focuses on legacy process, and it opens by taking a few shots at constrained random verification.
Constrained random verification is pretty mainstream in ASIC and FPGA verification these days, though it does mean different things to different teams. The argument for constrained random verification has always been that it’s a more productive way to cover the state space in increasingly large and complex designs. I used to believe that wholeheartedly. But after seeing and hearing about it fall short – from an efficiency point of view – many times, my current impression of constrained random verification is that it just doesn’t work as well as we all want it to.
In Agile Transformation In Functional Verification – Part I, as you’ll read below, I claim the way we approach constrained random verification results in a process that is fundamentally flawed. While strictly in terms of mechanics it may get us around the state space more efficiently, we’re still pushed to understand the entire state space up front, and we end up building test benches in giant steps (as opposed to baby steps, which we’ll talk more about in November). Impossible – I say – and getting impossible’er with every new generation of product that teams develop. Too big; too complicated.
For more, here’s an excerpt from the aforementioned article that takes aim at big up front design and constrained random verification:
Big up front design (BUFD) is common in IC development. In BUFD, a team attempts to build detailed plans, processes and documentation before starting development. Features and functional requirements are documented in detail; architectural decisions are made; functional partitioning and implementation details are analyzed; test plans are written; functional verification environments and coverage models are designed.
While BUFD represents a dominant legacy process in IC development, wide adoption of constrained random verification with functional coverage represents a relatively recent and significant shift in IC development. As has been documented countless times, constrained random verification better addresses the exploding state space in current designs. Relative to directed testing, it lets teams verify a larger state space with comparable staffing and resources.
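To make the contrast with directed testing concrete, here is a minimal, purely illustrative sketch in Python (the same ideas apply in SystemVerilog or any HVL). The directed test enumerates a handful of interesting cases by hand; the constrained random generator describes the legal stimulus space once and samples it, while a toy functional coverage model records which bins were exercised. All of the field names, bins, and numbers are invented for illustration.

```python
import random

# Hypothetical two-field stimulus space: packet length x packet type.
LENGTHS = range(1, 1025)                       # 1..1024 bytes
TYPES = ["unicast", "multicast", "broadcast"]

def directed_tests():
    # Directed testing: every interesting case is enumerated by hand.
    return [(64, "unicast"), (1024, "broadcast"), (1, "multicast")]

def constrained_random_tests(n):
    # Constrained random: describe the legal stimulus once, then sample it.
    for _ in range(n):
        yield (random.choice(LENGTHS), random.choice(TYPES))

def length_bucket(length):
    # Toy functional coverage bins for packet length.
    return "small" if length <= 64 else "medium" if length <= 512 else "large"

directed_cov = {(length_bucket(l), t) for l, t in directed_tests()}
random_cov = {(length_bucket(l), t) for l, t in constrained_random_tests(1000)}
total_bins = 3 * len(TYPES)

print(f"directed:           {len(directed_cov)}/{total_bins} cross bins hit")
print(f"constrained random: {len(random_cov)}/{total_bins} cross bins hit")
```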
But is constrained random verification with functional coverage living up to its potential? Constrained random verification employed with BUFD contains one significant flaw. From a product point of view, a design of even moderate complexity is near incomprehensible to a single person or even a team of people; the combination of architecture and implementation details is just too overwhelming. While the technique and associated tools may be adequate for addressing all these details, the human brain is not!
The flaws of constrained random verification and the limitations of the human brain are easy to spot during crunch time, near project end, when the verification team is fully engaged in its quest toward 100% coverage. Because of the random nature of stimulus, it is very difficult for verification engineers to predict progress in the test writing phase of a project. Not all coverage points are created equal, so the path toward 100% coverage is highly non-linear in terms of time required per coverage item.
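A toy simulation makes that non-linearity visible. Suppose (hypothetically; the bin names and probabilities below are invented) that each coverage bin has a different chance of being hit by any single random transaction. The common-case bins close almost immediately, while the rare corner cases consume nearly the entire simulation budget:

```python
import random

# Toy coverage model: each bin has a different (made-up) probability of
# being hit by any single random transaction.
bin_hit_prob = {
    "typical_payload": 0.50,
    "back_to_back":    0.10,
    "fifo_full":       0.01,
    "error_retry":     0.001,
}

rng = random.Random(1)
closed_at = {}
transactions = 0
while len(closed_at) < len(bin_hit_prob):
    transactions += 1
    for name, p in bin_hit_prob.items():
        if name not in closed_at and rng.random() < p:
            closed_at[name] = transactions

for name, t in sorted(closed_at.items(), key=lambda kv: kv[1]):
    print(f"{name:16s} first hit after {t:6d} transactions")
```

With these numbers, the first couple of bins are typically hit within a handful of transactions while the last one takes on the order of a thousand; the coverage curve flattens out long before it reaches 100%.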
It is common for verification engineers to rework portions of the environment, or to write unplanned and complicated tests, to remove unforeseen limitations in that environment. This is particularly relevant as the focus of testing moves beyond the low hanging fruit and toward more remote areas of the state space.
That rework is rarely accounted for in the project schedule and causes many small but significant schedule slips. Ever wonder why tasks sit at 90% complete for so long? It is because those tasks are absorbing work that was never accounted for in the first place.
What is truly amazing is not that tasks sit at 90% for so long; it is that it is always a surprise when it happens! It should not be a surprise. With BUFD, it is impossible for a team to understand and plan a coverage model that will yield meaningful 100% coverage. It is equally impossible to comprehend the full requirements of the environment and tests up front, especially considering the random nature and uncertainty of the stimulus. BUFD will not give a team all the answers; rework of the environment will happen regardless of whether or not the schedule says so!
I’m interested in whether or not others share my skepticism with respect to constrained random verification. Has it lived up to its potential? Is constrained random as effective as we think? Could we be doing better?
Feel free to offer opinions… especially off-the-wall opinions and/or those that contradict my own.