Why I Like UVM Express

This week Mentor released an extension to UVM called UVM Express. Normally, when someone announces an extension to the UVM, it involves more code or more tools. Not so in this case. With the library already past 67,000 lines of code (can that be right??), Mentor isn’t piling yet more code on top. UVM Express is an “extension” that helps people use what’s already there.

Here are a few excerpts from the UVM Express page on Mentor’s Verification Academy, along with some additional commentary:

The UVM Express is a collection of techniques, coding styles and UVM usages that are designed to increase the productivity of functional verification. The techniques include raising the abstraction level of tests, writing tests using BFM function and task calls, adding functional coverage, and adding constrained-random stimulus generation.

Seasoned (aka: skeptical) verification engineers who have seen their share of new product announcements promising “increased productivity in functional verification” and suggesting “raising the level of abstraction” might be tapping the back button by this point, but I’d encourage those skeptics to read on. I think the “revelations” appear in the next few sentences.

By Example: Test-driven Development of Verification IP

I’ve been putting a lot of time into developing new examples for SVUnit lately and as of Wednesday last week, I’ve finished another that shows how people can use SVUnit to do test-driven development of verification IP.

This particular example involves development of an APB master BFM. APB isn’t the most complicated of bus protocols, but it’s a very good subject for an example like this because the code and tests are easy to understand (there’s an APB example in the latest UVM release also, presumably for the same reason).
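To give a flavour of how the tests in an example like this read, here’s a minimal sketch of an SVUnit unit test for an APB master BFM. The `SVTEST macros are standard SVUnit; the write() task and the APB signal checks are hypothetical stand-ins, not the actual example code.

`SVUNIT_TESTS_BEGIN

  // hypothetical: drive one write transfer through the BFM, then
  // check what landed on the (assumed) APB signals
  `SVTEST(single_write_xfer)
    uut.write('h10, 'hdeadbeef);     // address, data
    `FAIL_IF(paddr  !== 'h10)
    `FAIL_IF(pwdata !== 'hdeadbeef)
    `FAIL_IF(pwrite !== 1'b1)
  `SVTEST_END

`SVUNIT_TESTS_END

The test fails fast on the first mismatched signal, which is exactly the kind of tight feedback loop TDD is after.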

UPDATE: for people looking at UVM Express, announced by Mentor Graphics on Feb 22, this is my interpretation of how TDD can be applied at the lowest layer: development of the BFM. Examples showing the addition of functional coverage and completion of an agent that includes sequence generation are still to come. You can see UVM Express on the Mentor website by going here.


Unit Testing UVM Components: The Making Of

Today we take another step into the practical with a demonstration of how SVUnit can be used to test UVM components.

In the 3rd installment of the SVUnit Demo Series, I take people through a simple – yet complete – example of what’s required to test a UVM component within the SVUnit framework (if you haven’t seen the video yet, I’d recommend watching it here before reading on). The example I put together – and more importantly, the plumbing under the hood required to make it work – took me quite a while for reasons that I don’t really get into in the video… but I will talk a bit about them here.

The usage model for SVUnit involves sequentially running a series of classes or modules through a corresponding list of unit test methods. In UVM, however, due to the tight coupling between the phase methods in all UVM components and the instance of uvm_root that ultimately drives the invocation of each phase method, UVM components run in parallel.

One usage model wants to be sequential and the other is coded for parallelism, which gives us two models that are fundamentally at odds… so for unit testing UVM components, I had to get creative :).

I wouldn’t consider myself a UVM expert, so the best way forward wasn’t immediately obvious. First I attempted to disable the components I wasn’t interested in by temporarily removing them from the component hierarchy. That was a dead end, however, because of the hierarchy’s local access restrictions. My second idea was to replace components with a dud that effectively had no implementation… meaning that if the dud ran in place of an actual component, nothing would end up happening. That too was a dead end because of the same local access restrictions of the component hierarchy (in hindsight I should have known right away this was a no-go, but I was still learning!).

The implementation that finally worked involved creating and adding a new uvm_domain that runs in parallel with the existing UVM domain (where the run-time phases live). In the video, I talk about idle components that do nothing and a single unit under test that is driven through the run phases, iteratively if necessary. The idle components end up being idle because the newly created domain they’re assigned to – after of course going through the common phases of build, connect, end of elaboration and start of simulation – raises an objection in the pre_reset_phase. That objection effectively means that components assigned to the domain never advance further. That’s what idle means… never advancing past pre_reset.
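The mechanism can be sketched as a component whose pre_reset_phase raises an objection that is never dropped. This is my paraphrase of the idea – the class name is hypothetical, not the actual SVUnit source:

class svunit_idle_stopper extends uvm_component;
  `uvm_component_utils(svunit_idle_stopper)

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  // raise an objection and never drop it; the idle domain (and every
  // component assigned to it) stalls here and never leaves pre_reset
  task pre_reset_phase(uvm_phase phase);
    phase.raise_objection(this);
  endtask
endclass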

I was pleasantly surprised to see that components can be assigned to different domains at any time, which means I could activate and deactivate components whenever I wanted simply by changing which domain they’re assigned to.

When a component is deactivated, it is assigned to the new idle domain by…

function void svunit_deactivate_uvm_component(uvm_component c);
  // park the component in the idle domain; the second argument
  // applies the change hierarchically, to the component's children too
  c.set_domain(svunit_idle_uvm_domain::get_svunit_domain(), 1);
endfunction

Similarly, where I show a component being activated, it is being assigned to the normal uvm domain by…

function void svunit_activate_uvm_component(uvm_component c);
  // move the component (and its children) back to the active uvm domain
  c.set_domain(uvm_domain::get_uvm_domain(), 1);
endfunction
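Put together, a unit test can wake a single component, run it, then park it again. Something like the following hypothetical test body, where svunit_uvm_test_start/finish are assumed helper names for driving the run-time phases, not necessarily the real API:

task test_my_component();
  // wake only the unit under test; everything else stays idle
  svunit_activate_uvm_component(my_uut);

  // drive the run-time phases for the active domain
  svunit_uvm_test_start();

  // ... stimulus and checks against my_uut ...

  svunit_uvm_test_finish();

  // park the component so the next unit test starts from a clean slate
  svunit_deactivate_uvm_component(my_uut);
endtask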

That seems to be the trick to running UVM components sequentially in SVUnit without adding any new control or requirements to the components themselves, something I was looking to avoid from the outset.

If you want to take a closer look at the code, you can become an SVUnit project member and early adopter by getting a hold of me at neil.johnson@agilesoc.com.

Bonus marks go to the person that looks at the code and can suggest a better way of doing the same thing!

-neil

UVM Still Isn’t A Methodology

A few months ago, I posted an article on AgileSoC.com titled UVM Is Not A Methodology. The point of that article was to encourage people to break away from the idea that verification frameworks like UVM truly deserve the label ‘methodology’.

In the article, I argue that earning the label methodology requires a number of other considerations that go beyond the standardized framework and take into account the people using it:

  • A Sustained Training Strategy
  • Mentoring Of New Teammates
  • Regular Review Cycles
  • Early Design Integration
  • Early Model Integration
  • Incremental Development, Testing and Coverage Collection
  • Organization Specific Refinement

That article generated a lot of interest and a few comments. I got some compliments from some people and some minor disagreement from a few others. I’ve summarized a few below.

Neil, you’ve outdone yourself. Excellent article IMO.

We’ll start with a good one :). I got that comment from a friend, a real expert in functional verification and a person I learned a lot from when I decided to specialize in functional verification. I really appreciated that.

Neil, UVM _is_ a methodology, but not in the same sense as you have described in your blog post. It is a methodology that provides a framework for designing and building testbenches… You are correct that it is not a complete verification methodology covering all the things you list. All of those things certainly have to be addressed by each verification team, but can they be standardized? Training, mentoring, and reviews are essential to a good verification methodology, but it would be difficult to provide meaningful standards beyond what is already in the software engineering literature. (read the unedited comment here)

That isn’t total agreement, but I took the comment as an acknowledgement nonetheless that in order to be complete, a methodology has to go beyond the framework to include some of the things I identified. The fact that those things can’t be standardized is a good reminder that teams using frameworks like UVM need to look after the extra stuff themselves. No one can do that for them.

The article makes some valid points but falls into the same trap that I think a lot of people do; i.e. getting confused between the UVM code library and UVM the methodology … My personal view … is that the customer teams who insist on self-learning the eRM/URM/OVM/UVM library or the (e, SC, or SV) language and won’t ask for training or guidance are the ones who end up getting the least from the methodology…These same users achieve far less reuse and randomisation than they should be getting, and the effectiveness of their testing is much lower than it should be. Where customers have invited me in to help architect their flow … we’ve created some really nice modular environments that are reusable, maintainable and adaptable. These customer teams have become more productive and have then propagated that methodology experience to other teams in their company, much as the article describes. This is the real UVM, where code meets experience … I’m saying this not to boast about…my own abilities, but to stress the point that getting all fired up about a code library isn’t going to make you more productive or effective, there’s a whole lot of hard work and care goes into really building a good UVM testbench. (read the unedited comment here).

Again, that comment isn’t total agreement either, but it does highlight the fact that UVM is larger than a code library and that mentoring plays a big part in how effective teams can be when using it.

The last paragraph of that article states it properly. It is a framework you can build a methodology around, but it is not a methodology in and of itself. The fact that it took the author 1600 words to state that simple fact indicates the article is mainly marketing fluff from a consulting company trying to scare managers. Much like calling UVM a methodology is EDA marketing fluff to convince managers it is more than just a framework. (read the unedited comment here)

That one was my favorite! Complete agreement followed up with the accusation of fear mongering!

Always nice to see people taking the time to comment 🙂

neil

Q. What do you call UVM? Methodology? Framework? Something else?

The Newbie’s Guide To AgileSoC.com

For anyone who’s new to AgileSoC.com, here’s a guide to what we have. I’ve listed all the top ranked articles here. I also have my favorites… these articles aren’t necessarily the most popular but they’re the ones that I’m happiest with. Finally, a couple of sleeper articles.

…and don’t forget to follow the discussions on the LinkedIn group!

Top Ranked

  1. UVM Is Not A Methodology: This one is top ranked by a mile. Primarily for the verification engineers out there, this article discusses what teams need to keep in mind when adopting technology like UVM.
  2. Top-down ESL Design With Kanban: This article came together as I was reading 2 different books (ESL Models and their Applications (1st edition) and Kanban and Scrum: Making the Most of Both). It combines the modified V approach to system development that Brian Bailey and Grant Martin present and Kanban, which Bryan Morris and I have always thought of as being hardware friendly.
  3. An Agile Approach To ESL Modeling: This is a general article for the ESL crowd. Why is modeling important, how modeling can fail and how agile can help modeling teams succeed.
  4. Agile IC Development With Scrum – Part I: the first of a two part video of the paper Bryan and I presented at SNUG San Jose in 2010. In the video, we talk about how hardware teams would have to evolve to adopt Scrum.
  5. IC Development And The Agile Manifesto: The Agile Manifesto spells out the fundamentals of agile development. This article shows how the manifesto is just as applicable to hardware development as it has been to software development.

My Favorites

  1. Operation Basic Sanity: A Faster Way To Sane Hardware: agile makes sense to a lot of people, but getting started can be tricky to say the least. I like this article because it gives new teams a way to get started without changing much of what they already do.
  2. Top-down ESL Design With Kanban: top ranked on the site and also one of my favorites.
  3. Agile Transformation In Functional Verification – Part I: I think this is another good article that helps verification teams take the mental leap into agile development.

Sleeper Articles

  1. Realizing EDA360 With Agile Development: If you’re not into the EDA360 message from Cadence, then the title might scare you away. But this isn’t just more EDA360. The theme here is convergence in hardware development, how functional teams drift apart over time and how agile can bring them back together.
  2. Why Agile Is A Good Fit For ASIC and FPGA Development: I think this was the first article we posted. I go back to it periodically just to see if our early writing still makes sense. I think it does!

Neil