A suggested approach to updating a 1200 x 800 pixel display at 1000 fps would be to break up the display into a matrix of lower-resolution OLED panels, ideally OLEDs with a so-called "edge-to-edge active display". For instance, a 2 x 2 matrix of 640 x 480 OLED panels would provide a bit more than the specified resolution. However, the sub-panels selected must themselves allow refresh rates of 1000 frames per second as well.
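As a rough check of the pixel rates involved: 1200 x 800 pixels refreshed 1000 times per second is about 960 Mpixel/s in aggregate, whereas each 640 x 480 sub-panel at 1000 fps only has to sustain roughly 307 Mpixel/s, which is the main reason splitting the display across several panels and controllers is attractive.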
Each panel needs to be controlled through a separate signal channel. Depending on the capability versus price trade-off of the FPGA chosen, a single FPGA may be used to drive one or more of the panels.
This is similar to the way ultra-large displays are created for stage-performance backdrops, for instance, using a matrix of standard large-screen HD LCD or LED televisions. Each TV is typically driven from a separate video source, and allowance is made for bezel distances by cropping an appropriate amount of the image at each edge of each TV.
As the application itself is not described in the question, the assumption here is that a reasonably contiguous display is required. Unfortunately, using separate panels will not provide a contiguous display area, as the connections to each OLED panel in the matrix have to come out somewhere. Thus, bezel-like gaps will need to exist between panels, similar to the matrix-of-TVs approach mentioned above.
If this is unacceptable, the alternative is to select an OLED panel of the desired resolution that brings its individual row and column signals out to a connector and allows them to be driven in definable banks. Typical OLED panels with Chip-on-Glass (COG) controllers will not work this way; raw OLED panels will need to be sourced.
Individual banks of OLED rows and columns would then be controlled via separate channels, and conceivably separate controllers, to achieve the desired end result.
In programming, this technique is called double buffering. You have two memory buffers between the camera and the monitor. While the camera fills one of them, the other is displayed on the monitor (for however long that takes: 1, 2, or more monitor frames). When the first buffer is full (a whole frame has been read from the camera), the two buffers are swapped, so the second buffer is now filled from the camera while the first one is displayed on the monitor.
This way, some camera frames will be displayed for 2 monitor frames, some for only 1 (if the camera is faster than half of the monitor frame rate) or for 3 monitor frames (if the camera is slower than half of the monitor frame rate), but the synchronization is provided automatically.
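A minimal sketch of this loop in C, assuming a hypothetical 640 x 480 8-bit camera; capture_frame() and display_frame() are placeholder stand-ins for the real camera-readout and monitor scan-out interfaces, which are not specified in the question:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Double-buffering sketch. Frame size and the capture/display
   functions are hypothetical stand-ins for the real hardware. */

#define WIDTH  640
#define HEIGHT 480
#define FRAME_BYTES (WIDTH * HEIGHT)

static uint8_t buffer_a[FRAME_BYTES];
static uint8_t buffer_b[FRAME_BYTES];

/* Stand-in: blocks until the camera has written one complete frame. */
static void capture_frame(uint8_t *dst)
{
    memset(dst, 0, FRAME_BYTES);   /* pretend a frame was read here */
}

/* Stand-in: point the monitor refresh logic at a completed frame.
   The display keeps re-reading this buffer (for 1, 2, 3 ... monitor
   frames) until the pointer is swapped again. */
static void display_frame(const uint8_t *src)
{
    printf("displaying frame at %p\n", (const void *)src);
}

int main(void)
{
    uint8_t *fill = buffer_a;   /* the camera writes into this one        */
    uint8_t *show = buffer_b;   /* the monitor is refreshed from this one */

    for (int frame = 0; frame < 10; frame++) {
        capture_frame(fill);            /* wait for a full camera frame */

        /* Swap roles: the frame just captured becomes the displayed
           buffer, and the old display buffer is overwritten next. */
        uint8_t *tmp = show;
        show = fill;
        fill = tmp;

        display_frame(show);
    }
    return 0;
}
```

In a frame grabber or FPGA design the same idea applies, except that the two buffers live in dedicated RAM and the swap is triggered by the camera's end-of-frame signal rather than by software.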
I hope this explanation is clear enough and that I understood the problem correctly.
Best Answer
Your monitor should auto-detect the resolution. You can double-check this by hooking it up to a PC with a VGA output and forcing the screen resolution to 800x600. You'll see that the monitor will scale the image.
This scaling is done by the monitor, not by the video card. There is a "scaler chip" inside TFT monitors that takes care of this; more details can be found by searching for "scaler chip" online.