How to calculate normalized image coordinates from pixel coordinates?
I need to calculate 3D points from corresponding image points. The algorithm to do this is explained here: http://en.wikipedia.org/wiki/Essential_matrix. It is not working for me; I get wrong results.
1. The procedure uses "normalized image coordinates". How can I calculate them?
2. Are the R and T matrices (calculated from the essential matrix) the same as the ones output by the cv::stereoCalibrate procedure?
This is what I am doing now:
- Stereo-calibrate my setup
- Invert the camera matrix for both cameras
- Calculate the normalized coordinates of the corresponding points for both cameras, by multiplying the inverted camera matrix with the homogeneous pixel coordinates (see the sketch after this list)
The remaining operations are based on the section "3D points from corresponding image points" of the same article: http://en.wikipedia.org/wiki/Essential_matrix
- Calculate x3, the physical/real-world depth coordinate of the point, using the rotation and translation matrices (given by the stereo calibration procedure)
- Calculate the x1 and x2 coordinates as described in the article
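Here is a minimal sketch of these two steps as I understand them (the helper names are my own). Note the convention: the article models the second camera as x2 = R*(x1 - t), while cv::stereoCalibrate returns R, T such that x2 = R*x1 + T, so the article's t corresponds to -R.t()*T.

#include <opencv2/core.hpp>

// Normalized homogeneous coordinates: y = K^{-1} * (u, v, 1)^T
cv::Vec3d normalizePoint(const cv::Matx33d& K, const cv::Point2d& p)
{
    return K.inv() * cv::Vec3d(p.x, p.y, 1.0);
}

// Triangulate the 3D point in the first camera's frame from the
// normalized points y1 (first camera) and y2 (second camera),
// following the article's formulas:
//   x3 = (r1 - y2_x * r3) . t / ((r1 - y2_x * r3) . y1)
//   x1 = y1_x * x3,  x2 = y1_y * x3
// Here t is the article's translation, i.e. -R.t() * T if R, T come
// from cv::stereoCalibrate.
cv::Point3d triangulate(const cv::Matx33d& R, const cv::Vec3d& t,
                        const cv::Vec3d& y1, const cv::Vec3d& y2)
{
    cv::Vec3d r1(R(0, 0), R(0, 1), R(0, 2)); // first row of R
    cv::Vec3d r3(R(2, 0), R(2, 1), R(2, 2)); // third row of R
    cv::Vec3d a = r1 - y2[0] * r3;
    double x3 = a.dot(t) / a.dot(y1);
    return cv::Point3d(y1[0] * x3, y1[1] * x3, x3);
}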
Thanks for your help.
OpenCV has a function that does just that -- cv::undistortPoints. Given the points, the camera matrix, and the camera's distortion coefficients, the "normalized" points will be output. You can read more on the page describing that function.
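For example (hypothetical intrinsics and points, just to show the call):

#include <opencv2/calib3d.hpp> // cv::undistortPoints (lives in imgproc in OpenCV 2.x)
#include <vector>

int main()
{
    // Hypothetical intrinsics and distortion; use your calibration output here.
    cv::Mat K = (cv::Mat_<double>(3, 3) << 800, 0, 320,
                                             0, 800, 240,
                                             0,   0,   1);
    cv::Mat dist = (cv::Mat_<double>(1, 5) << -0.2, 0.05, 0, 0, 0);

    std::vector<cv::Point2f> pixels = { cv::Point2f(350.f, 260.f) };
    std::vector<cv::Point2f> normalized;

    // Without the optional R and P arguments the output is in
    // normalized image coordinates: distortion is removed, then
    // x = (u - cx) / fx and y = (v - cy) / fy.
    cv::undistortPoints(pixels, normalized, K, dist);
    return 0;
}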