
Implementing image stabilization with OpenCV, C++ [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.


Closed 6 years ago.


I have several questions today and would appreciate help with any of them, even partially. They are urgent, so please do your best with me.

I'm working on image stabilization using an optical-flow method. That step is done correctly and gives nice results; now I want to stabilize the video itself, and I have two ways to do that.

First way: I have obtained the rotation and translation between the features of the first frame and the next frame. My supervisor suggests the following: make a canvas twice as big as the original frame and copy the first frame into the middle. For each next frame, use the transformation I obtained to calculate an offset and a rotation, and copy the frame into its corrected position on the canvas. This should produce a corrected video.

So how do I do that? Please mention the relevant OpenCV functions if they exist.

Second way:

I was trying to do it my own way: get the corrected video by warping the image. I have obtained the homography matrix, but it doesn't work with the function cvWarpPerspective(). Anyway, is what I'm doing here correct for getting the stabilized image, or what do you suggest?

Questions:

1- The homography matrix gives negative values; is that correct?

2- As I mentioned above, given the rotation and translation matrices I apply this equation to map an estimated feature in the second frame back to its place in the first frame:

Y = R * x + t (rotation matrix R, translation vector t)

Sometimes it gives almost perfect results, but other times the results are awful: I get negative values far from where the points are supposed to be. Why is that?
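The mapping Y = R * x + t can be checked numerically in isolation, with plain doubles and no OpenCV. A true 2-D rotation preserves distances between points, so applying the recovered R and t to known points and checking the result is one way to tell whether R, t, or the correspondences are the culprit. A minimal sketch:

```cpp
#include <cmath>

struct Pt { double x, y; };

// Apply y = R(theta) * x + t for a 2-D rotation by theta radians and
// translation (tx, ty).
Pt applyRigid(Pt p, double theta, double tx, double ty)
{
    double c = std::cos(theta), s = std::sin(theta);
    return { c * p.x - s * p.y + tx,
             s * p.x + c * p.y + ty };
}
```

Note that negative coordinates by themselves are not an error: a point rotated past an axis or translated left/up legitimately gets negative values. Huge jumps in magnitude, on the other hand, point at a bad R or t.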

Please answer whatever you can, even a single sub-question.

Thank you so much.


It appears you might have wrong correspondences that spoil your algorithm for recovering the global alignment (by the way, which algorithm do you use?). Trying it on a synthetic dataset with only a handful of known correspondences might help you see what happens.


The homography matrix gives negative values; is that correct?

Probably not. I guess that would require viewing the image from "behind".

other times the results are awful: I get negative values far from where the points are supposed to be. Why is that?

Some ideas:

  • an int type overflow?
  • features detected incorrectly?
  • if I remember correctly, homography matrices only work when the translation is not null
