
Warping Images using cvWarpPerspective Results in Some Parts of the images out of the viewable area

I am trying to stitch two images together.

To do so, I extracted SIFT features and found matches between the two images using this C implementation:

http://web.engr.oregonstate.edu/~hess/index.html

After that, I estimated the homography matrix from the matched points using:

http://www.ics.forth.gr/~lourakis/homest/

But if I use this homography matrix in the cvWarpPerspective function, some parts of the image end up outside the viewable area (at negative coordinates).

To solve this, I tried to calculate the bounding box first by pushing the four corners of the image through the homography matrix, then translating the initial image before warping it. But this changed the warping result.

Is there any way to warp an image while keeping it entirely in the viewable area?

I would appreciate any help. Thanks in advance...


As an exercise, I tried the same thing a while ago and stumbled upon the same problem. I solved it by calculating the bounding box first, as you have described, and then writing my own warping function. Warping is very simple, but you need to implement the interpolation (lerp) yourself. Since some pixel-wise weighting is required anyway for good results (multiple pixels from different images may end up on the same output pixel and thus need to be blended), I did not feel bad about abandoning cvWarpPerspective.


Edit:

After some more work into this, I have learned a few things:

After you discover your homography between img1 and img2, and therefore have gotten a transformation matrix from 1 to 2, you're almost ready to run cvWarpPerspective.

First, though, you need to pad img1. You should be able to get the bounding box for img1 fairly easily. Make a new image of size (boundingBox->width + img2->width*2, boundingBox->height + img2->height*2) and cvCopy img1 into the middle.

If you tried cvWarpPerspective now, your transformation would be off because you translated img1. We need another matrix to account for this translation. If you placed img1 in the middle of the composite image, then its upper left corner sits at (img2->width, img2->height). Make a translation matrix = {1, 0, img2->width, 0, 1, img2->height, 0, 0, 1}. Now use cvMatMul(translation, homography, resultant) to get your final transformation matrix.

You are now ready to call cvWarpPerspective with the composite-sized image and your resultant matrix to warp img1.

There's some more work to be done for stitching, but this solves your problem of the warped image going out of the viewable area. To complete the stitching, you'll probably need to paste img2 onto a composite-sized image, create a mask for your warped image, and then copy the warped image onto composite-img2 using that mask so you get a good-looking stitched image.


I think you're on the right track. You need to account for the image translation that happened when you moved the image.

Another way is to pad the source image around the edges. Depending on how much the perspective changes, you may need to pad quite a bit. Also, the padding has to be done before feature matching and before computing the warping matrix. Obviously, you will pay for the bigger image in computation time.
