How to learn constrained random stimulus based and assertion based testbenches?

Status
Not open for further replies.

matrixofdynamism

Especially since the arrival of high-level verification languages, new verification techniques have become mainstream. These include constrained random stimulus based testbenches, which use various methods to record coverage metrics, and assertion based testbenches, which build on property specification language extensions of existing languages. All of this also includes the SystemVerilog expansion of the preexisting Verilog language and the introduction of things like OSVVM for VHDL.

I am used to writing directed functional tests, and for the designs I have worked on so far it is not clear how to go about applying the advanced methods mentioned above to verify them. Although I understand the basic ideas behind these verification methods from a theoretical perspective, it is not clear how to put them into practice.

What should I do to get a practical understanding of how to use constrained random stimulus and assertion based testbenches? One option could be to study several projects where this has been implemented, but I work with VHDL and do not know SystemVerilog or UVM. What would be the most reasonable thing to do going forward?
 

It can be difficult to separate a couple of concepts here - SV + UVM by itself does not teach verification theory, and neither does OSVVM. But you can implement constrained random testing with functional coverage using either of them (or other things like UVVM, C, Matlab etc). A lot of the examples you will find online go into implementation detail without really getting the theory across.

Remember what constrained random actually means. In theory, you would randomise all your inputs to detect the cases that cause the design to fail. But in most cases this is impractical and just downright pointless. For example, a packet header field may be 8 bits but only have 50 values that actually hold a meaning, with the rest "reserved". It likely doesn't make sense to test every "reserved" value, hence you constrain this field to only those 50 values (you may add some "reserved" values to check the design really is ignoring them in those cases).
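As a sketch of that header-field idea (the field name, ranges and weights here are made up; the random calls are OSVVM's RandomPkg):

```vhdl
library osvvm;
use osvvm.RandomPkg.all;

-- inside a stimulus process:
--   variable rand   : RandomPType;
--   variable header : integer;

-- 90% of the time pick one of the 50 legal values; otherwise pick a
-- "reserved" value to check the DUT really ignores it in those cases
if rand.DistInt((90, 10)) = 0 then
  header := rand.RandInt(0, 49);    -- legal values
else
  header := rand.RandInt(50, 255);  -- a "reserved" value
end if;
```

The weighting is the "constraint" - the stimulus stays random, but the probability mass sits on the values that actually mean something.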

Constraining applies to interfaces too. For example, an interface may provide bursts of data. You may constrain the BFM (bus functional model) to only provide certain realistic burst patterns, or randomise the burst size within certain parameters. This also falls under "constrained random".
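A hedged sketch of constraining a BFM this way - `send_burst` and `wait_cycles` are hypothetical BFM procedures, and the limits are invented; only the random calls are OSVVM's:

```vhdl
library osvvm;
use osvvm.RandomPkg.all;

-- inside the BFM driver process:
--   variable rand      : RandomPType;
--   variable burst_len : integer;
--   variable gap       : integer;

burst_len := rand.RandInt(1, 16);  -- only realistic burst sizes
gap       := rand.RandInt(0, 8);   -- random idle cycles between bursts
send_burst(burst_len);             -- hypothetical BFM procedure
wait_cycles(gap);                  -- hypothetical BFM procedure
```

Randomising the gaps as well as the bursts is what shakes out back-pressure and flow-control bugs that directed tests rarely hit.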

This book covers a lot of the theory using a variety of languages (the first edition was written before SV was around, afaik).

So how do you learn? First - you need a different mindset. With directed tests, you usually come from an angle of "How can I get this DUT to pass the test?". For proper verification, you need to ask "How can I break this DUT?". You need to think about the testing before you create any of the DUT. In theory, you should be able to create the testbench entirely from the DUT's interface and design-goals specification. You need to treat the DUT as a black box. Ideally you are using well-defined interfaces and you have created some good BFMs that can be re-used by any test using that interface. You will also need a model of the DUT of some sort (this could be a VHDL model, a file from Matlab with expected outputs etc).

Another mindset change needed is writing verification like software. This holds for VHDL and particularly SV. In SV you are using classes; in VHDL you are using protected types, access types (pointers) and a lot of variables. Trying to write a testbench that looks like HDL is not going to get you very far. Experience with an object-oriented language can be a great benefit here, as in VHDL you will write a lot of OO-like code. Here is an example style from many of my VHDL testbenches (I am an OSVVM user):


Code VHDL:
process
  variable cov     : CovPType;          -- coverage from OSVVM
  variable rand    : RandomPType;       -- random generator from OSVVM
  variable sboard  : ScoreboardPType;   -- scoreboard from OSVVM
  variable pkt_gen : packet_generator;  -- custom packet generator protected type
  variable covPt   : integer;           -- current coverage point
  variable pkt_ptr : byte_array_ptr_t;  -- access type to a custom byte array
  variable axis_trans_array : axis_trans_array_t;  -- custom AXI-Stream transaction list

  -- etc
begin

  -- add coverage goals to coverage
  -- configure packet generator

  while not cov.IsCovered loop
    covPt   := cov.RandCovPoint;
    pkt_ptr := new byte_array_t'(pkt_gen.getNextPacket(covPt));
    axis_trans_array := to_axis_array(pkt_ptr.all, axis_if);

    generate_expected_output(axis_trans_array, sboard);
    send_axis_pkt(axis_trans_array, transaction_q);
    wait until input_driver_empty;

    cov.ICover(covPt);
    deallocate(pkt_ptr);
  end loop;

  cov.WriteBin;  -- coverage reports
  ReportAlerts;  -- OSVVM logging final report
  wait;
end process;



While this is a simplified setup, it is the style that covers most of my testbenches. You will see it looks very much like an OO language, with protected types used heavily. While a lot of the above is custom in-house code underneath, OSVVM provides the mechanisms to write code like this for any interface.

I suggest playing with the Random Package to start with. Along with randomising your data (constrained to be legal), think about how you create a BFM for an interface randomising things like bursts and waits between bursts. It is usually the control logic that is the main problem in DUTs - the data path is the easy bit, but making it appear at the correct time and correct order is usually the big problem.

Next, move on to the scoreboard. This is simply a FIFO of items that can be checked. You push the expected values in via a push() method, and then check the actual output using check(). It is fairly simple - the difficult part is generating the expected results and only checking valid output.
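A minimal sketch of that push/check flow, assuming OSVVM's ScoreboardPkg_slv (the signal names here are hypothetical):

```vhdl
library osvvm;
use osvvm.ScoreboardPkg_slv.all;
use osvvm.AlertLogPkg.all;

-- variable sboard : ScoreboardPType;

-- model side: push the expected word as it is generated
sboard.Push(expected_data);

-- monitor side: compare actual DUT output against the head of the
-- FIFO, but only on cycles where the output is flagged as valid
if out_valid = '1' then
  sboard.Check(actual_data);
end if;

-- at end of test: anything left in the FIFO is a missing output
AlertIf(not sboard.Empty, "Scoreboard not empty at end of test");
```

Check() logs an alert on mismatch, so the testbench becomes self-checking without any waveform inspection.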

If you can get your head around these two problems, you have now created yourself a basic constrained random self checking testbench. This can easily be added to some form of CI (continuous integration) tool so it can be automatically re-run regularly to ensure it is up to date and still working as the code changes around it.
 