OpenCV – image coordinate to world coordinate

camera, camera-calibration, computer-vision, opencv, projection

I calibrated my mono camera using OpenCV. Now I know the camera intrinsic matrix and the distortion coefficients [K1, K2, P1, P2, K3, K4, K5, K6] of my camera. Assuming the camera is placed at [x, y, z] with [roll, pitch, yaw] rotations, how can I get the world coordinates of each pixel when the camera is looking at the floor (z = 0)?


Best Answer

You say that you calibrated your camera, which gives you (see the short calibration sketch after this list):

  • Intrinsic parameters
  • Extrinsic parameters (rotation, translation)
  • Distortion coefficients
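
For reference, here is a minimal calibration sketch that produces exactly those three outputs. The 9x6 chessboard and the calib_*.png image names are placeholder assumptions, not details from the question:

```python
import glob

import cv2
import numpy as np

# Chessboard with 9x6 inner corners and images named calib_*.png are
# placeholders -- adjust both for your own setup.
pattern = (9, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points, image_size = [], [], None
for path in glob.glob("calib_*.png"):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)
        image_size = gray.shape[::-1]  # (width, height)

# CALIB_RATIONAL_MODEL makes OpenCV estimate k4, k5, k6 in addition to
# the default [k1, k2, p1, p2, k3].
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None,
    flags=cv2.CALIB_RATIONAL_MODEL)
```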

First, to compensate for the lens distortion, you can use the undistort function and get an undistorted image.
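
For instance, a minimal sketch, assuming K and dist hold your calibration results (the values and file names below are placeholders):

```python
import cv2
import numpy as np

# K and dist are placeholders -- use the values returned by calibrateCamera.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
dist = np.array([0.1, -0.05, 0.001, 0.001, 0.0, 0.0, 0.0, 0.0])  # [k1 k2 p1 p2 k3 k4 k5 k6]

img = cv2.imread("frame.png")                # hypothetical input image
undistorted = cv2.undistort(img, K, dist)    # removes the lens distortion
cv2.imwrite("frame_undistorted.png", undistorted)
```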

Now, what you are left with are the intrinsic/extrinsic parameters and the pinhole camera model. The equation below, taken from the OpenCV documentation, shows how a 3D world point is transformed into 2D image coordinates with those parameters:

$$
s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} =
\begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} r_{11} & r_{12} & r_{13} & t_1 \\ r_{21} & r_{22} & r_{23} & t_2 \\ r_{31} & r_{32} & r_{33} & t_3 \end{bmatrix}
\begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}
$$

Basically, you multiply the homogeneous 3D coordinates by a projection matrix, which is the product of the intrinsic parameters (the first matrix in the equation) and the extrinsic parameters (the second matrix in the equation). The extrinsic matrix contains both the rotation and translation components, [R|T].
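
As a sketch of that multiplication, with made-up intrinsics and pose (not values from the question), here is how K[R|T] projects a point on the floor into pixel coordinates:

```python
import cv2
import numpy as np

# Placeholder intrinsics and extrinsics -- substitute your own calibration and pose.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
rvec = np.zeros((3, 1))                      # rotation as a Rodrigues vector
tvec = np.array([[0.0], [0.0], [1.5]])       # e.g. camera 1.5 units above the floor

R, _ = cv2.Rodrigues(rvec)                   # 3x3 rotation matrix
P = K @ np.hstack((R, tvec))                 # 3x4 projection matrix K [R|T]

Xw = np.array([0.5, 0.2, 0.0, 1.0])          # homogeneous world point on the floor (Z = 0)
suv = P @ Xw                                 # = s * [u, v, 1]
u, v = suv[:2] / suv[2]                      # divide by the scale s to get pixels

# The same projection done by OpenCV (zero distortion, i.e. an undistorted image):
pts, _ = cv2.projectPoints(np.array([[0.5, 0.2, 0.0]]), rvec, tvec, K, np.zeros(5))
print(u, v, pts.ravel())
```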