
Shape tracking after filtered detection

I am using a Kinect to write computer vision software. I have my program set up so that it filters out everything beyond a certain distance, and once something large enough to be a hand comes close enough, my program assumes it is one.

However, I want to extend this functionality. Currently, if the hand leaves the depth-filtered region, the software no longer tracks its position. How can I follow the hand after I've recognized it, regardless of depth?


I have not worked with the Kinect controller, but I once worked with ranges returned by a laser scanner, albeit only in a horizontal plane. The technique we used could be applied to the Kinect too.

When we found an object we wanted to identify, we calculated its center point [X,Y] (it would be [X,Y,Z] for the Kinect). For the next "frame" we looked for all points within a given radius r of [X,Y], and from the points we found we calculated a new center [X,Y] to use for the next "frame", and so on.

We used the maximum possible object velocity and the frame rate to calculate the smallest possible r that ensured the object could not escape our tracking between two measurement frames.
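
A minimal sketch of this per-frame update in Python/NumPy (the point array layout, hand speed limit, and safety margin are assumptions for illustration, not values from the answer):

```python
import numpy as np

def track_centroid(points, prev_center, r):
    """One tracking step: keep only the points within radius r of the
    previous center, then return their mean as the new center.

    points      -- (N, 3) array of [X, Y, Z] measurements for this frame
    prev_center -- (3,) array, the center found in the previous frame
    r           -- gating radius derived from max speed and frame rate
    """
    dists = np.linalg.norm(points - prev_center, axis=1)
    nearby = points[dists <= r]
    if len(nearby) == 0:
        return None  # object lost: no points fell inside the gate
    return nearby.mean(axis=0)

# Radius from the answer's rule: between two frames the object cannot
# move farther than v_max / fps, so gate at that distance plus a margin.
v_max = 2.0        # assumed maximum hand speed, m/s
fps = 30.0         # Kinect depth stream frame rate
r = 1.5 * (v_max / fps)
```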


You can have a look at Mean Shift Tracking: http://www.comp.nus.edu.sg/~cs4243/lecture/meanshift.pdf

With mean shift it's possible to keep tracking the blob even as it gets smaller or bigger (moves farther away or closer).
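
As a rough illustration, OpenCV ships a mean shift tracker; here is a sketch that feeds it a depth frame. The file name, initial window, and the use of raw normalized depth as the probability image are assumptions (a histogram back-projection of the detected hand would normally work better):

```python
import cv2

# Hypothetical setup: an 8-bit depth frame, and the (x, y, w, h) box
# found by the initial depth filter around the hand.
depth = cv2.imread("depth_frame.png", cv2.IMREAD_GRAYSCALE)
win = (200, 150, 80, 80)

# Weight pixels by how "hand-like" they are. Here we simply assume
# brighter pixels are more likely to belong to the hand.
prob = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX)

# Stop after 10 iterations or when the window moves less than 1 px.
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
_, win = cv2.meanShift(prob, win, criteria)
print("tracked window:", win)
```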


I have not worked with the Kinect controller, but you can try the fast template matching algorithm implemented in: https://github.com/dajuric/accord-net-extensions

Just use your depth image instead of a standard grayscale image. Samples are included.
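
The linked library is .NET; purely to illustrate the same idea in the language used above, here is a sketch of template matching on a depth image with OpenCV. The file names and match threshold are assumptions, and this is plain matchTemplate, not the library's fast variant:

```python
import cv2

# Hypothetical inputs: a full depth frame, plus a small depth template
# cropped around the hand when it was first detected.
depth = cv2.imread("depth_frame.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("hand_template.png", cv2.IMREAD_GRAYSCALE)

# Normalized cross-correlation tolerates an overall offset in values,
# which helps as the hand's depth changes while it moves.
result = cv2.matchTemplate(depth, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)

if max_val > 0.7:  # assumed match threshold
    h, w = template.shape
    print("hand at", max_loc, "size", (w, h))
```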

P.S. This library also provides other tracking algorithms, such as Kalman filtering, particle filtering, JPDAF, CamShift, and mean shift (samples included).
