Verifying UVM Error Conditions with SVUnit UVM Report Mock

Verifying error conditions and UVM testbench checkers just got easier! The SVUnit UVM report mock lets you automate testing of UVM errors and fatals to increase confidence that the checkers in your testbench are defect-free. The SVUnit UVM report mock is a scoreboard-style checker where actual and expected errors are logged and compared to trigger a PASS/FAIL result.

Here’s how it works…

In a unit test, a user would go through the following steps to verify a uvm_error is being triggered properly:

  • set the expectation of a particular error by calling the svunit_uvm_report_mock::expect_error(…) function
  • apply the error causing scenario to the UUT
  • call svunit_uvm_report_mock::verify_complete() to ensure actual errors are trapped in the UUT as expected (and that there are no expected errors left outstanding).

To help get those points across, we’ll look at the uvm_report_mock example that’s packaged with SVUnit. In this example we have a UUT with a method called verify_arg_is_not_99(). In case it’s not obvious, verify_arg_is_not_99() is designed to trap variables that are set to 99 by asserting a uvm_error. Other values are ignored.

To verify our UUT is working as we expect, we of course need a few tests. Here’s the first from the example that tests the error condition.

[Screenshot: the unit test for the error condition]
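
If you don’t have the example open, that test boils down to something like this sketch (the UUT handle and test name are placeholders of my own; the mock calls are the three steps from the list above):

    `SVTEST(catch_the_99_error)
      // expect exactly one uvm_error
      svunit_uvm_report_mock::expect_error();

      // apply the error-causing scenario to the UUT
      my_uut.verify_arg_is_not_99(99);

      // compare the actual errors against the expected errors
      `FAIL_IF(!svunit_uvm_report_mock::verify_complete())
    `SVTEST_END(catch_the_99_error)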

That test exactly follows the bullet steps above. We call the expect_error() method once because we expect one error. That pushes an item onto the expected queue of our log message scoreboard inside the svunit_uvm_report_mock. Then we call our UUT function with 99 as the argument. That pushes an item onto the actual queue of our log message scoreboard. Finally, we call verify_complete(). That does an in-order comparison between the actual and expected queues to confirm we’re getting the errors we expect.

To pass this test, this is the code we need in our UUT. An error is triggered for 99; nothing happens otherwise.

[Screenshot: the UUT code with verify_arg_is_not_99()]
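
A minimal sketch of what that UUT method amounts to (the ID and message strings here are just illustrative):

    function void verify_arg_is_not_99(int arg);
      // 99 is trapped with a uvm_error; any other value is ignored
      if (arg == 99)
        `uvm_error("UUT", "arg is set to the illegal value 99")
    endfunction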

At this point, you might be wondering how the svunit_uvm_report_mock knows about UVM errors. Well, it’s not as magical as you might think. To do it, we redefine the uvm_error macro to call the svunit_uvm_report_mock::actual_error(…) function instead of the normal UVM reporter. The macros are redefined in svunit_uvm_mock_defines.sv and they look like this…

[Screenshot: the macro redefinitions in svunit_uvm_mock_defines.sv]
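
The gist of those redefinitions is something like this (paraphrased, not copied verbatim from svunit_uvm_mock_defines.sv):

    // route errors/fatals to the mock instead of the UVM reporter
    `ifdef uvm_error
      `undef uvm_error
    `endif
    `define uvm_error(ID,MSG) \
      svunit_uvm_report_mock::actual_error(MSG, ID);

    `ifdef uvm_fatal
      `undef uvm_fatal
    `endif
    `define uvm_fatal(ID,MSG) \
      svunit_uvm_report_mock::actual_fatal(MSG, ID);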

So instead of sending errors to your log file where you’d have to visually confirm they’re being asserted, we redirect them by mocking out the UVM reporting and automate the confirmation. For that to work, we have to make sure the macros are redefined as we intend. That’s done with includes at the top of the unit test file. The uvm_macros are loaded first, then the svunit_uvm_mock_defines are loaded to redefine the macros.

[Screenshot: the includes at the top of the unit test file]
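
In other words, the top of the unit test file starts with something like this (the order matters so the mock redefinitions win):

    `include "uvm_macros.svh"
    `include "svunit_uvm_mock_defines.sv"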

To exhaustively verify our UUT, we need one more test to make sure values other than 99 don’t cause a problem. In this test, you’ll see we still call verify_complete() to make sure we get what we expect, but we don’t call expect_error() because we’re… uhh… not expecting an error.

[Screenshot: the unit test for non-error values]
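
That second test is essentially this (again, the names are placeholders):

    `SVTEST(ignore_values_other_than_99)
      // no expect_error() call because no error is expected
      my_uut.verify_arg_is_not_99(42);

      // passes only if nothing unexpected was trapped
      `FAIL_IF(!svunit_uvm_report_mock::verify_complete())
    `SVTEST_END(ignore_values_other_than_99)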

In this case, verify_complete() will return 1. That tells us no errors were expected and none were detected.

A final feature that we’ll talk about is the ability to expect a specific MSG and ID from your error. To do that, you pass strings to the expect_error() function (the defaults are empty strings, which means the content of the message isn’t checked).

[Screenshot: the unit test that checks MSG and ID]
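
Roughly, that looks like this (the argument order, message first then ID, is an assumption that mirrors the actual_error(MSG, ID) call above; the strings are placeholders):

    `SVTEST(catch_the_99_error_check_msg_and_id)
      // the trapped error has to match this message and ID, not just occur
      svunit_uvm_report_mock::expect_error("arg is set to the illegal value 99", "UUT");

      my_uut.verify_arg_is_not_99(99);

      `FAIL_IF(!svunit_uvm_report_mock::verify_complete())
    `SVTEST_END(catch_the_99_error_check_msg_and_id)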

You can see that test is the same as the first except that the MSG and ID are passed to expect_error(). Just getting the error in this test case isn’t enough; the content of the message has to match as well. If it doesn’t, the test fails.

That’s how to automate checking of uvm_error logging with the SVUnit UVM report mock. FYI… uvm_fatal is also supported, so if that’s what you’re looking for, just do a search-and-replace to swap error for fatal. The mechanisms for both are the same.
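
As a sketch, the fatal version of the first test would read like this (expect_fatal() is the name implied by that swap and do_something_fatal() is a hypothetical UUT method):

    svunit_uvm_report_mock::expect_fatal();
    my_uut.do_something_fatal();
    `FAIL_IF(!svunit_uvm_report_mock::verify_complete())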

Lastly, I’ll mention that this is version 0.1 functionality, so if you’ve got any better ideas for doing the same kind of automation, I’d love to hear them!

-neil

Q. If you don’t test your checkers, how do you know they’re working?

11 thoughts on “Verifying UVM Error Conditions with SVUnit UVM Report Mock”

  1. Very cool. I’m excited to try this out. I wrote unit tests for a scoreboard a few weeks ago, and since I didn’t have a way to check that it produced uvm_errors when it was supposed to, I gave it an analysis port and had it produce little comparison transactions and write them to the port (basically just a class with a bit member that indicates comparison success/fail). My unit test gets those and makes sure they are as expected. Works pretty well, but this sounds a little simpler.

    1. Let me know how it goes! I’ve done something similar to what you describe but it didn’t feel right. It worked, but I like the mock because you don’t need to add anything to your code. The macros make it easy to redirect the functionality.

      -neil

  2. OK, finally getting around to playing with this. I have a simple unit test I’m developing where I don’t expect any uvm_error messages, but I’m getting one. I want the test to fail if that happens so I added this at the top of my unit test file:

    `include "svunit_uvm_mock_defines.sv"

    and this to the end of the test:

    `FAIL_IF(!uvm_report_mock::verify_complete())

    Just like your example above. The test now fails. That’s great. The uvm_error message, however, no longer appears in the log file. That makes it hard to figure out exactly why the test failed. Is there a way to still see the error messages? Looking at the svunit code, I think the answer right now is, no.

    It seems like uvm_report_mock::actual_error() could call uvm_report_error, like the actual uvm_error macro does, couldn’t it?

    1. You’re right, the answer is no. That was intentional.

      My thoughts as I put this together… verifying error conditions has always been a manual process (in that you have to cause the error and then visually confirm the detection in the log file). The visual confirmation is the warm fuzzy feeling in that you feel good when you see it. Not including the call to the uvm_report_error is my attempt at convincing people the automated check is worth more than the warm fuzzy feeling… and that the warm fuzzy feeling is no longer necessary.

      That’s a leap, admittedly, and considering you’re the first to comment, maybe it’s a leap too far. I hope you have the patience to try without it for a while; once you gain confidence in the automated detection, you won’t need the warm fuzzy feeling anymore.

      …or, of course, you could just re-redefine the macro 🙂

      -neil

  3. OK, I just created my_svunit_uvm_mock_defines.sv that looks like this:

    `ifdef uvm_error
      `undef uvm_error
    `endif

    `ifdef uvm_fatal
      `undef uvm_fatal
    `endif

    // report through UVM as usual, then log to the mock
    `define uvm_error(ID,MSG) \
      begin \
        if (uvm_report_enabled(UVM_NONE,UVM_ERROR,ID)) \
          uvm_report_error (ID, MSG, UVM_NONE, `uvm_file, `uvm_line); \
        uvm_report_mock::actual_error(MSG, ID); \
      end

    `define uvm_fatal(ID,MSG) \
      begin \
        if (uvm_report_enabled(UVM_NONE,UVM_FATAL,ID)) \
          uvm_report_fatal (ID, MSG, UVM_NONE, `uvm_file, `uvm_line); \
        uvm_report_mock::actual_fatal(MSG, ID); \
      end

    Where I copy-n-pasted the uvm_error and uvm_fatal implementations in there before the uvm_report_mock calls. It does what I want. I don’t like the copy-n-paste, but I can’t think of a better way off the top of my head. There’s no way to call the original macro from inside the redefined one. Dang preprocessor!

    In this case it’s not a warm fuzzy I’m looking for, it’s “why did this test produce a uvm_error when I didn’t expect it to? What’s the error?” I agree that when you are checking for expected errors you don’t need to see them printed out. It’s when you don’t expect them that you want to see them.

    Here’s another thought. It made sense before that svunit did not fail a test if a uvm_error showed up, because possibly it was expected. Now that there is a mechanism to expect uvm_errors, maybe svunit could always fail a test if one shows up that was not expected? Would you still need to redefine the uvm_error macro to do that?

    1. Right, I see your point.

      My preference is to still keep the uvm logging out of the loop (I find the mixed uvm/svunit messages confusing) but for what you’re suggesting, I’m sure I can find a way to give more info on error/fatal conditions. I modeled this on MockIO, which is used for building embedded drivers. Just looking at it now, I like the way the errors are displayed… it shows the function call that happened incorrectly. I’ll propose something similar here: instead of tee’ing to the uvm logging, I’ll print the macro as it was invoked in an svunit message.

      How does that sound? It’ll probably be a couple of weeks before I get to it, but that seems like a decent way to go.

      -neil

  4. The concept sounds fine. I couldn’t find any MockIO examples in 5 seconds of googling, but it’s probably fine. Is there an svunit_message function or macro I should be using in my unit tests instead of, like, $display and stuff?

  5. It would also be handy if there were a way to access and print out all the errors that were expected, with flags indicating whether or not each was detected, in the case of the user expecting several of them. This helps if the user is expecting two similar errors and the matching function is mismatching the two (or if the user has them specified out of order).

    Erik

  6. I’d like to propose an improvement: make the code look like this, so that teardown() always checks for UVM errors. Each test thus gets checked for UVM errors by default. If the user wants to check for a specific error, they can do that within a specific test.

    class c_unit_test extends svunit_testcase;

      uut_c dut;

      function new(string name);
        super.new(name);
        dut = new();
      endfunction

      task setup();
        super.setup();
        uvm_report_mock::setup();
      endtask

      task teardown();
        super.teardown();
        `FAIL_IF( ! uvm_report_mock::verify_complete() );
      endtask : teardown

      task run_test();
        super.run_test();
      endtask

      `SVUNIT_TESTS_BEGIN

      `SVTEST(unexp_error)
        dut.gen_error();
      `SVTEST_END(unexp_error)

      `SVTEST(expected_error)
        uvm_report_mock::expect_error();
        dut.gen_error();
      `SVTEST_END(expected_error)

      `SVUNIT_TESTS_END

    endclass

  7. Erik/Bryan, I started a comment that ended up as a new blog post. Take a look at http://wp.me/p1GmTc-GD for the features that allow using the global reporting functions as well as the macros, plus better debug output from verify_complete(). Note the comment that says v1.4 (which is the newest version); 1.3 had a bug that I just fixed :).

    -neil
