Significance of timing simulation for FPGA

I have started to wonder what the significance of timing simulation for FPGAs is nowadays. The idea behind it is easy to justify by quoting some Xilinx material:

Performing a thorough timing simulation ensures that the finished design is free of defects that can easily be missed, such as the following:

  1. Post-synthesis and post-implementation functionality changes caused by the following:
     - Synthesis attributes or constraints that can cause simulation/implementation mismatches, such as translate_off/translate_on or full_case/parallel_case
     - UNISIM attributes applied in the UCF file or using synthesis attributes
     - Differences between how synthesis and different simulators interpret the language

  2. Dual-port RAM collisions

  3. Missing or improperly applied timing constraints

  4. Operation of asynchronous paths
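
To make the first point concrete, here is a minimal SystemVerilog sketch (the module and signal names are mine, purely for illustration) of the kind of pragma-induced mismatch Xilinx warns about: the full_case pragma lets synthesis treat the uncovered select value as a don't-care, while an RTL simulator ignores the pragma and holds the previous value of y, so functional simulation and post-synthesis behaviour can disagree.

    module pragma_mismatch (
        input  logic [1:0] sel,
        input  logic       a, b, c,
        output logic       y
    );
        // The pragma below tells synthesis the case statement is "full", so no latch is
        // built and sel == 2'b11 becomes a don't-care; RTL simulation ignores the pragma
        // and simply keeps the previous value of y for that select code.
        always_comb begin
            case (sel) // synthesis full_case
                2'b00: y = a;
                2'b01: y = b;
                2'b10: y = c;
                // 2'b11 intentionally not covered
            endcase
        end
    endmodule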

However, I do not know any engineers or companies in my area who perform timing simulations for FPGAs. They simply assume that if the timing constraints are correct and there are no timing violations after place and route, then everything is fine.

I can understand this approach, as timing simulation can take a long time for complex designs and, to be honest, I have never heard or read of a case where an FPGA timing simulation helped fix a bug that was not also a bug at the functional level of the design.

Can we just say that nowadays timing verification of FPGA designs has been reduced to trusting the outputs of the vendor tool chains?

Best Answer

IO timing is of more concern than internal timing because, as you say, the tools are "good at it". Having said that, the tools are only as good as your constraints: you can reliably count on getting back the same errors you specified, just as you can reliably count on getting back, post-synthesis/PAR, the same logic bugs that you didn't discover in RTL verification. According to the Wilson Research Group survey commissioned by Mentor Graphics, timing errors are second only to logic errors as the cause of re-spins. ASIC or FPGA, it doesn't matter; you just spend less money and time re-spinning an FPGA.

It is important to remember that the timing constraints are applied on the front end of PAR as an input and checked on the back end against the PAR netlist results. That only guarantees that you got back what you specified: miss a specification, or specify it wrongly, and you will likely have issues in hardware (#3 above).

Minimizing clock domains and using sequential design techniques are the best ways to avoid errors. Avoid transparent latches at all costs (sometimes you can't), and be diligent about clock domain crossings. You can "IGNORE" a clock domain crossing in your constraint file if you have guaranteed, through design review and/or CDC tools, proper crossing techniques (i.e. satisfy Nyquist); both of these are addressed in #4 above.
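
As a concrete illustration of that last point, here is a minimal sketch of the standard two-flop synchronizer for a single-bit crossing (module and signal names are made up); once the crossing goes through a structure like this, the path into it is the one you can safely exempt in the constraints.

    // Two-flop synchronizer sketch for a single-bit CDC (illustrative names).
    // The ASYNC_REG attribute asks the Xilinx tools to keep the two flops adjacent;
    // the asynchronous path into sync_ff[0] is the one you would exempt in the constraints.
    module sync_2ff (
        input  logic clk_dst,   // destination-domain clock
        input  logic async_in,  // level signal arriving from another clock domain
        output logic sync_out   // version of async_in synchronized to clk_dst
    );
        (* ASYNC_REG = "TRUE" *) logic [1:0] sync_ff;

        always_ff @(posedge clk_dst)
            sync_ff <= {sync_ff[0], async_in};

        assign sync_out = sync_ff[1];
    endmodule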

If you have built agents (UVM terminology... think "BFM" if you are not familiar with UVM) properly with timing, normally configured "OFF" for functional sim and turned "ON" for the post-PAR netlist sim with back-annotation, you can find IO timing errors. Having said that, you are at the mercy of the board designer to have provided the proper IO constraints: the agent to which you interface (a DSP, a CPU, whatever) will have specifications for how the signals are delivered and received (min and max skews/timing), the trace will have some amount of delay (hopefully "negligible", but easily accommodated in the top-level TB), and your ASIC/FPGA IO will have to accommodate the numbers the board designer gave you. If you are "in spec", the agent timing is implemented properly, and you are still having failures at the IO (maybe your verification person wrote assertions to capture these errors), then you and the board designer are going to need to review the timing analysis. The point is, you and I would prefer to do this in a back-annotated simulation, where you have the visibility you need, not in the lab, where you have to use scopes, etc.
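
For what that looks like in practice, here is a rough sketch of a netlist testbench with back-annotation; the design name, SDF file name, and timing numbers are placeholders of mine, not anything from the original post. The idea is simply that $sdf_annotate applies the routed timing to the netlist, and the driver launches inputs at the spec'd times the board-level device would.

    `timescale 1ns/1ps

    // Sketch of a post-PAR netlist testbench with back-annotation.
    // "my_design" and "my_design_routed.sdf" are placeholder names.
    module tb_netlist_top;
        logic clk = 1'b0;
        logic din;
        logic dout;

        localparam realtime T_CO_MAX = 3.2ns;   // made-up external device clock-to-out

        always #5 clk = ~clk;                   // 100 MHz clock, for illustration

        my_design dut (.clk(clk), .din(din), .dout(dout));   // compiled post-PAR netlist

        initial begin
            // Apply the routed timing (SDF from the implementation tools) to the netlist.
            $sdf_annotate("my_design_routed.sdf", dut);
        end

        // Hypothetical timed driver: launch the input a spec'd clock-to-out after the edge,
        // as the external device on the board would.
        always @(posedge clk)
            din <= #(T_CO_MAX) $urandom_range(0, 1);
    endmodule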

I would suggest, unless you are responsible for integration too, that your goal is to stay out of the lab, and you do that by having a robust verification environment that allows you to find logic bugs and, maybe, timing errors.

Having said all of that, I don't think most people do back-annotated simulations until they have a hardware issue and suspect timing... it is nice, though, if the TB is already set up to simulate with constrained-random timing based on the specifications.
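
For example, a constrained-random version of such a driver might look like the following sketch; the class name, delay window, and signals are assumptions chosen only to show the idea of randomizing launch times within a datasheet window.

    `timescale 1ns/1ps

    // Constrained-random input timing sketch; the 500-2500 ps window is a made-up
    // datasheet specification for when the external device drives data after the clock.
    class input_timing;
        rand int unsigned t_valid_ps;
        constraint spec_c { t_valid_ps inside {[500:2500]}; }
    endclass

    module tb_timing_demo;
        logic clk = 1'b0;
        logic din;
        input_timing t = new();

        always #5 clk = ~clk;

        // Each cycle, pick a new in-spec launch time and drive the input at that offset.
        always @(posedge clk) begin
            void'(t.randomize());
            din <= #(t.t_valid_ps * 1ps) $urandom_range(0, 1);
        end
    endmodule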