
How to calculate normalized image coordinates from pixel coordinates?

I need to calculate 3D points from corresponding image points. The algorithm to do this is explained here: http://en.wikipedia.org/wiki/Essential_matrix. It is not working for me; I get wrong results.

  1. The procedure uses "normalized image coordinates" -- how can I calculate them?
  2. Are the R and T matrices (calculated from the essential matrix) the same as those output by the cv::stereoCalibrate procedure?

This is what I am doing now:

  1. Stereo calibrate my setup
  2. Invert the camera matrix for both cameras
  3. Calculate the normalized coordinates of corresponding points for both cameras, by multiplying the inverted camera matrix with the homogeneous pixel coordinates (see the sketch after this list)
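For reference, here is a minimal C++ sketch of step 3 (the helper name normalizePoint is mine; it assumes a CV_64F camera matrix and already-undistorted pixel coordinates):

```cpp
#include <opencv2/core.hpp>

// Convert a pixel coordinate to a normalized image coordinate by
// multiplying the inverted camera matrix K^-1 with the homogeneous
// pixel coordinates, as in step 3 above. Lens distortion is not
// handled here, so the input must already be undistorted.
cv::Point2d normalizePoint(const cv::Mat& K, const cv::Point2d& pixel)
{
    cv::Mat p = (cv::Mat_<double>(3, 1) << pixel.x, pixel.y, 1.0); // homogeneous pixel
    cv::Mat n = K.inv() * p;                                       // n = K^-1 * p
    return cv::Point2d(n.at<double>(0) / n.at<double>(2),
                       n.at<double>(1) / n.at<double>(2));
}
```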

The remaining operations are based on this article http://en.wikipedia.org/wiki/Essential_matrix, in the section "3D points from corresponding image points":

  1. Calculating the x3 physical/real-world coordinate of the point using the rotation and translation matrices (given by the stereo calibration procedure).
  2. Calculating the x1 and x2 coordinates as written in the article (see the sketch after this list).
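For illustration, a sketch of that triangulation (the helper name triangulate is mine; it assumes the Wikipedia article's convention that camera 2 sees x' = R(x - t)):

```cpp
#include <opencv2/core.hpp>

// Triangulate a 3D point (in camera 1's coordinate frame) from the
// normalized image coordinates y (camera 1) and yp (camera 2), following
// the formula in the Wikipedia section. Assumes the article's convention
// that camera 2 sees x' = R * (x - t), with R (3x3) and t (3x1) as CV_64F.
cv::Point3d triangulate(const cv::Mat& R, const cv::Mat& t,
                        const cv::Point2d& y, const cv::Point2d& yp)
{
    cv::Mat yh  = (cv::Mat_<double>(3, 1) << y.x, y.y, 1.0); // homogeneous y
    cv::Mat a   = R.row(0) - yp.x * R.row(2);                 // r1 - y1'*r3 (1x3 row)
    cv::Mat num = a * t;                                      // (r1 - y1'*r3) . t
    cv::Mat den = a * yh;                                     // (r1 - y1'*r3) . y
    double x3   = num.at<double>(0) / den.at<double>(0);      // depth in camera 1
    return cv::Point3d(y.x * x3, y.y * x3, x3);               // x1 = y1*x3, x2 = y2*x3
}
```

Note that cv::stereoCalibrate uses the convention x2 = R*x1 + T, which is not the same as the article's x' = R(x - t); to plug stereo calibration output into this formula you would need t = -R^T * T. This convention mismatch is a common source of wrong results.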

Thanks for your help.


OpenCV has a function that does just that -- cv::undistortPoints. Given the points, the camera matrix, and the camera's distortion coefficients, it outputs the "normalized" points. You can read more on the page describing that function.
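A minimal usage sketch (the camera matrix and distortion values here are placeholders, not from any real calibration):

```cpp
#include <vector>
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>

int main()
{
    // Intrinsics and distortion coefficients from calibration
    // (placeholder values -- substitute your own calibration output).
    cv::Mat cameraMatrix = (cv::Mat_<double>(3, 3) << 800,   0, 320,
                                                        0, 800, 240,
                                                        0,   0,   1);
    cv::Mat distCoeffs = cv::Mat::zeros(5, 1, CV_64F);

    std::vector<cv::Point2f> pixelPoints = { {400.f, 300.f} };
    std::vector<cv::Point2f> normalizedPoints;

    // Without the optional R and P arguments, the output points are in
    // normalized image coordinates: distortion removed, then K^-1 applied.
    cv::undistortPoints(pixelPoints, normalizedPoints, cameraMatrix, distCoeffs);
    return 0;
}
```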
