How do image sensors output in different resolutions

camera image-sensor

Say an image sensor has an array size of 1280 x 800 pixels. Its specifications state that it has output support for the following resolutions: WXGA (1280 x 800) and VGA (640 x 480).

My question is the following: If we choose to output images in the VGA format, which part of the original 1280 x 800 array is the 640 x 480 extracted from? Is it taken from the middle? Is it just a sub-sampled (or averaged) version of the original array?


[Optional follow-up: The aspect ratios of the two resolutions also differ; could that affect the camera's intrinsic parameters (pixel size, aspect ratio, principal point) when the smaller resolution is chosen?]

Best Answer

It depends on the sensor type, but typically the whole sensor is read out and the image is resampled afterwards. "Digital zoom", by contrast, is marketing speak for cropping: only a window of the array is used.

Many high-speed video cameras can read out a small region of the sensor at a higher frame rate than the full array. At the top end of the supported frame rates there is therefore a trade-off between frame rate and resolution, with a roughly constant throughput in pixels per second.

Some more sophisticated sensors have multiple sets of readout electronics that can read pixels out in groups (binning), giving better low-light performance and less processing overhead at the lower resolution.
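The difference between the readout strategies above can be sketched numerically. This is an illustration only, not any particular sensor's behavior: `bin_2x2` stands in for group readout/averaging, `center_crop` for windowed readout ("digital zoom"), and `max_fps` assumes a purely hypothetical fixed pixel-per-second readout budget. Note that 2x2 binning of a 1280 x 800 array yields 640 x 400, not 640 x 480, which is exactly the aspect-ratio mismatch raised in the question; reaching VGA would require additional cropping or non-integer rescaling.

```python
import numpy as np

# Hypothetical 1280 x 800 sensor frame (pixel values are placeholders).
full = np.arange(800 * 1280, dtype=np.float32).reshape(800, 1280)

def bin_2x2(frame):
    """Average each 2x2 block of pixels ("binning"): 1280x800 -> 640x400."""
    h, w = frame.shape
    return frame.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def center_crop(frame, out_h, out_w):
    """Read out only a centered window ("digital zoom" / windowing)."""
    h, w = frame.shape
    top = (h - out_h) // 2
    left = (w - out_w) // 2
    return frame[top:top + out_h, left:left + out_w]

def max_fps(width, height, pixel_budget=1_000_000_000):
    """Toy model: frame rate limited by a fixed pixels-per-second budget."""
    return pixel_budget / (width * height)

binned = bin_2x2(full)                    # 640 x 400: full field of view, lower resolution
vga_window = center_crop(full, 480, 640)  # 640 x 480: full resolution, narrower field of view

print(binned.shape)               # (400, 640)
print(vga_window.shape)           # (480, 640)
print(round(max_fps(1280, 800)))  # full-frame rate under the toy budget
print(round(max_fps(640, 480)))   # ~3.3x higher rate for a VGA window
```

Under this toy model the window readout gains frame rate in proportion to the reduction in pixel count, which is the "constant pixels per second" trade-off described above.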
