Writing synthesizable testbenches

hdl, synthesis, testbench, verification

I'm just starting to learn SystemVerilog and work with FPGAs, and so far I haven't found a satisfactory way to test my code. I'm coming from a software background, where I have always written thorough automated tests for my code. I have used JUnit-style frameworks (with lots of domain-specific wrapping on top to reduce boilerplate) as well as QuickCheck-style frameworks, and found ways to write concise test code that nevertheless gives me high confidence in the product I'm developing. I haven't yet found anything equivalent for hardware description languages.

Introductory Verilog texts typically present testbenches that simply drive the input signals. These testbenches are unreadable, repetitive walls of signal assignments without assertions; such tests are not even automatic. Some texts incorporate machine-checkable assertions into the testbench code, so that one does not have to inspect waveforms to determine whether the test passed or failed. Still, the "wall of repetitive text" problem remains.
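For context, by "machine-checkable assertions" I mean something like this minimal self-checking testbench sketch, written for a hypothetical combinational `adder` module (the module name and its ports `a`, `b`, `sum` are assumptions for illustration):

```systemverilog
// Self-checking testbench sketch: random stimulus plus assertions,
// so no waveform inspection is needed to see pass/fail.
module adder_tb;
  logic [7:0] a, b;
  logic [8:0] sum;

  adder dut (.a(a), .b(b), .sum(sum));  // hypothetical DUT

  initial begin
    for (int i = 0; i < 100; i++) begin
      a = $urandom_range(0, 255);
      b = $urandom_range(0, 255);
      #1;  // let combinational logic settle
      // Machine-checkable assertion instead of eyeballing waveforms
      assert (sum == a + b)
        else $error("adder failed: %0d + %0d != %0d", a, b, sum);
    end
    $display("All checks passed.");
    $finish;
  end
endmodule
```

Even this is still simulation-only, which is exactly my concern below.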

I researched the issue further and found that the current industry standard for RTL verification is UVM. The approach does look better to me, although I haven't studied it in detail; the big disadvantage for me is that UVM testbenches are not synthesizable.

If I can't run my tests on the actual FPGA, how can I be confident that my synthesized design works correctly? I understand that there is a high chance it will work once the simulation tests have passed, but there are a lot of assumptions involved (that my code has no race conditions, that it meets timing requirements, that the synthesis tool is correct, etc.).

A parallel in the software world would be developing a program in C, compiling it with GCC and testing it on x86 on Windows, then compiling it with Clang and running it in production on ARM on Linux. The program should work, assuming that it has no undefined behavior, makes no non-portable assumptions about the execution environment, and that neither compiler has bugs affecting it. Those are a lot of assumptions that most software engineers would not make; they would instead just run their tests on the production configuration.

Am I fundamentally misunderstanding something about the real-world hardware design process? How are designs verified once synthesized into FPGAs and actual silicon? How can existing test cases be run on designs in FPGAs and silicon?

Are there any industry-standard practices for synthesizable testbenches? Do people actually write synthesizable testbenches?

Best Answer

A general approach is to use more abstracted tests as you progress up the stack in terms of design complexity. Yes, you need to trust the tools (and the checking tools, such as logical equivalence proofs), but for a whole-FPGA design (a) there won't be space for a testbench, and (b) exhaustive coverage would take far too long.

A good approach is to use exhaustive or random simulation testbenches at sub-module level, and more functional or vector-based tests at full system level. For an FPGA which forms part of a complex design, you might end up needing to build a system emulator in hardware to drive the ports.

A suitable approach depends on your particular design, and how easy it is to generate test stimulus, capture the result and compare with the model (which might be simulation, or might be a higher level software model).
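As one sketch of that stimulus-plus-model pattern: a sub-module testbench can drive identical random stimulus into the DUT and a simple behavioural reference model written inline, and compare them cycle by cycle. The `accumulator` module and its ports here are assumptions for illustration:

```systemverilog
// Sketch: random stimulus checked against an inline behavioural model.
module accumulator_tb;
  logic        clk = 0, rst;
  logic [7:0]  din;
  logic [15:0] dut_out;
  logic [15:0] model_out;  // reference result, computed in the testbench

  accumulator dut (.clk(clk), .rst(rst), .din(din), .acc(dut_out));

  always #5 clk = ~clk;

  // Reference model: plain behavioural code, simulation-only.
  always_ff @(posedge clk)
    if (rst) model_out <= '0;
    else     model_out <= model_out + din;

  initial begin
    rst = 1; din = '0;
    repeat (2) @(negedge clk);
    rst = 0;
    repeat (1000) begin
      din = $urandom_range(0, 255);  // drive on negedge, away from the sampling edge
      @(negedge clk);
      assert (dut_out == model_out)
        else $error("mismatch: dut=%0d model=%0d", dut_out, model_out);
    end
    $display("1000 random cycles checked against the model.");
    $finish;
  end
endmodule
```

The same idea scales up by replacing the inline model with a higher-level software model and comparing logged vectors.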

I think that due to the big variations in application, it's hard to come up with a standardised approach. Sometimes the FPGA is used to accelerate testing of the design; sometimes it's the end product. Generally, you can trust that the resulting netlist (assuming it meets timing) will be faithfully reproduced by the FPGA fabric - the abstractions involved are simpler than the transformations that apply in mapping C code to assembler in the context of an OS.
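That said, people do sometimes put a small self-test directly on the FPGA. One classic synthesizable pattern is BIST: an on-chip LFSR generates pseudo-random stimulus, the DUT's outputs are compressed into a signature register, and the final signature is compared with one recorded from a known-good simulation run. A rough sketch (the `my_filter` DUT, its ports, and the golden signature value are all assumptions for illustration; the LFSR taps 16,15,13,4 give a maximal-length 16-bit sequence):

```systemverilog
// Synthesizable BIST-style self-test sketch: LFSR stimulus, signature
// compression, and a pass/fail LED. Runs for 2**16 cycles.
module fpga_self_test (
  input  logic clk,
  input  logic rst,
  output logic pass_led,
  output logic done_led
);
  // Placeholder: record this value from a known-good simulation run.
  localparam logic [15:0] GOLDEN_SIGNATURE = 16'hBEEF;

  logic [15:0] lfsr, misr, dut_out;
  logic [16:0] count;

  my_filter dut (.clk(clk), .rst(rst), .din(lfsr), .dout(dut_out));  // hypothetical DUT

  always_ff @(posedge clk) begin
    if (rst) begin
      lfsr  <= 16'h1;  // non-zero seed
      misr  <= '0;
      count <= '0;
    end else if (!count[16]) begin
      // Maximal-length 16-bit LFSR (taps 16,15,13,4)
      lfsr  <= {lfsr[14:0], lfsr[15] ^ lfsr[14] ^ lfsr[12] ^ lfsr[3]};
      // Fold the DUT output into the rotating signature register
      misr  <= {misr[14:0], misr[15]} ^ dut_out;
      count <= count + 1;
    end
  end

  assign done_led = count[16];
  assign pass_led = done_led && (misr == GOLDEN_SIGNATURE);
endmodule
```

This checks the synthesized design in its real fabric and at speed, but it only tells you pass/fail for that one stimulus stream - it doesn't replace simulation for debugging.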
