Electronic – Profiling for FPGA requirements for a high-performance camera

camera fpga image-sensor lvds sata

I would like to know how to profile an image acquisition and storage pipeline on an FPGA-based system that captures images from a CMOS image sensor over an LVDS interface, performs some basic image processing, and then stores the data onto two SSDs.

I'm talking about 100 fps at 11.94 megapixels with 16 bits per pixel, so the bandwidth is quite large: this is not the kind of classic camera we're used to.
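The stated figures can be turned into a concrete sustained data rate with a quick back-of-the-envelope calculation (the per-SSD figure simply assumes the data is striped evenly across the two drives):

```python
# Back-of-the-envelope throughput for the stated sensor parameters.
fps = 100
pixels_per_frame = 11.94e6    # 11.94 megapixels
bits_per_pixel = 16

pixel_rate = fps * pixels_per_frame            # pixels per second
byte_rate = pixel_rate * bits_per_pixel / 8    # bytes per second

print(f"pixel rate: {pixel_rate / 1e9:.3f} Gpixel/s")
print(f"data rate:  {byte_rate / 1e9:.3f} GB/s "
      f"({byte_rate * 8 / 1e9:.2f} Gbit/s)")
print(f"per SSD:    {byte_rate / 2 / 1e9:.3f} GB/s (striped across two)")
```

That works out to roughly 2.39 GB/s sustained, which is why a single SATA SSD (at most ~0.55 GB/s usable on SATA 3) cannot absorb the stream and the write load has to be split.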

Best Answer

The data bandwidth you mentioned is certainly part of the calculation, but only the beginning: the FPGA and the camera module need a compatible interface that can reach the required speed.
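As a sketch of that interface check: the raw pixel stream can be divided by an assumed per-lane rate to see how many LVDS pairs the sensor-to-FPGA link would need. Both the 800 Mbit/s lane rate and the 25 % protocol overhead below are assumptions for illustration; the real numbers come from the sensor's datasheet and the FPGA's I/O specifications.

```python
import math

# Raw pixel data rate from the question: 100 fps * 11.94 MP * 16 bit.
data_rate_bps = 100 * 11.94e6 * 16       # ~19.1 Gbit/s

lane_rate_bps = 800e6                    # assumed usable rate per LVDS pair
overhead = 1.25                          # assumed framing/encoding overhead

# Round up to a whole number of differential pairs.
lanes = math.ceil(data_rate_bps * overhead / lane_rate_bps)
print(f"required LVDS pairs: {lanes}")
```

If the sensor exposes fewer lanes than such an estimate demands, no amount of FPGA-side optimization will recover the missing link bandwidth.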

Whether your processing pipeline can be realized depends very much on your definition of "basic image processing". Ideally your algorithm is parallelizable so you can create multiple instances, and optimized for FPGA resource usage to avoid running out of limited resources like multipliers.
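One way to make "parallelizable" concrete: compare the incoming pixel rate against an assumed fabric clock to see how many pixels must be processed per clock cycle, i.e. how many parallel instances of a per-pixel operation the pipeline needs. The 200 MHz clock below is an assumption; the achievable Fmax depends entirely on the design and device.

```python
import math

pixel_rate = 100 * 11.94e6    # pixels per second from the sensor
fmax_hz = 200e6               # assumed pipeline clock frequency

# Assuming each instance consumes one pixel per clock,
# round up to a whole number of parallel instances.
instances = math.ceil(pixel_rate / fmax_hz)
print(f"parallel instances needed: {instances}")
```

Each of those instances multiplies the resource cost of the algorithm, which is where the non-linear resource effects described below start to matter.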

Resource usage on FPGAs is not always linear, so five copies of the same logic may use ten times as many LUTs as four copies, simply because you've run out of some "special" blocks (such as DSP slices) and the fifth instance has to emulate them in general-purpose logic, or because an instance needs to be wrapped around a special block.

Speaking in the abstract: you can always compile and simulate your design with the FPGA vendor's toolchain, even without the actual hardware. If the constraints are properly specified and the design gets through place-and-route without timing errors, then it should also run on hardware -- there is no dynamic reallocation of resources that would make behaviour unpredictable unless you explicitly add that (which may be necessary, e.g. to share multiplier units).

Interpreting errors from FPGA compilations and optimizing designs are rather complex topics though, about which entire books have been written. If the compiler reports a lack of resources, it could mean that you need a larger FPGA with more lookup tables, or that you need to rewrite the algorithm, or both.