Tracking blobs in AForge
I looked and looked. Does anybody know how to track blobs with AForge? I know they don't have it implemented, but I really need to use AForge because of the rest of the code I'm using. I saw some references to Kalman filtering, but I need an implementation, not theories.
tnx, v.
The AForge.NET BlobCounter will provide the blob finding, though it's fairly simple and won't support 'broken' blobs. If you want to implement some simple blob tracking, a few things you might consider:
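For reference, here's a minimal sketch of running the blob find with AForge's BlobCounter (names as in AForge.Imaging 2.x); `frame` is assumed to be an already-thresholded Bitmap:

```csharp
// Minimal blob-find sketch; 'frame' is assumed to be a pre-thresholded Bitmap.
using System;
using System.Drawing;
using AForge.Imaging;

BlobCounter counter = new BlobCounter();
counter.FilterBlobs = true;    // drop tiny noise blobs up front
counter.MinWidth = 5;          // size thresholds are assumptions to tune
counter.MinHeight = 5;

counter.ProcessImage(frame);
Blob[] blobs = counter.GetObjectsInformation();

foreach (Blob blob in blobs)
{
    AForge.Point com = blob.CenterOfGravity;   // sub-pixel centroid
    Console.WriteLine("blob at ({0}, {1}), area {2}", com.X, com.Y, blob.Area);
}
```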
If your blobs are occasionally fragmented, you may need to perform some clustering (grouping nearby center-of-mass locations to combine small fragments) to get a good estimate of the location. When analyzing multiple frames, the chance of hitting boundary conditions such as broken blobs goes up, so it's important to consider. Alternatively, if you have good control over conditions (such as lighting), that may be sufficient. Minor breaks (only a few pixels) can be repaired with repeated dilation/erosion operations before the blob find, though this can also amplify noise and reduce positional accuracy.
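For the few-pixel case, AForge ships a morphological Closing filter (dilatation followed by erosion) you can run on the thresholded image before counting; a quick sketch:

```csharp
// Mend small breaks with morphological closing before the blob find.
using System.Drawing;
using AForge.Imaging.Filters;

Closing closing = new Closing();        // 3x3 structuring element by default
Bitmap mended = closing.Apply(frame);   // 'frame' as in the earlier sketch
// For wider breaks, apply Dilatation n times, then Erosion n times.
```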
For the actual tracking, you have a few approaches. Kalman filtering can give you very good (sub-pixel) accuracy, because it integrates information from multiple frames. If you don't need that level of accuracy, you might consider a very simple algorithm, such as always picking the sufficiently large blob that was closest to the most recent location (sketched below). This works if the object isn't moving very quickly and no other blobs pop up near the object being tracked. If you need better performance, you can also estimate the velocity from the last two frames and use it to limit the region you have to search for the blob.
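A sketch of that simple rule, where the caller supplies the last known position and a minimum area (both assumptions to tune):

```csharp
// Pick the sufficiently large blob whose centroid is closest to the last
// known location; returns null when nothing qualifies this frame.
using AForge.Imaging;

static Blob PickNearest(Blob[] blobs, AForge.Point lastPos, int minArea)
{
    Blob best = null;
    float bestDist = float.MaxValue;

    foreach (Blob b in blobs)
    {
        if (b.Area < minArea)
            continue;                       // ignore noise blobs

        float dx = b.CenterOfGravity.X - lastPos.X;
        float dy = b.CenterOfGravity.Y - lastPos.Y;
        float dist = dx * dx + dy * dy;     // squared distance is fine for ranking

        if (dist < bestDist)
        {
            bestDist = dist;
            best = b;
        }
    }
    return best;
}
```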
If you need to track a high-velocity object, things get a bit more challenging. This is a case where you might combine blob finding with template matching: create a template from the blob find and match it against subsequent blobs to score them on their pattern rather than merely their size/location. This requires that the blob look reasonably consistent over time, which means the object's shape and the lighting conditions must stay roughly fixed.
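AForge does provide ExhaustiveTemplateMatching for this; a rough sketch, where `template` is assumed to be a small Bitmap cropped around the blob when tracking started:

```csharp
// Score the current frame against a stored template of the tracked object.
// Both images must share a pixel format (8bpp grayscale or 24bpp color).
using System.Drawing;
using AForge.Imaging;

// 0.85f is an assumed similarity threshold; tune it for your footage.
ExhaustiveTemplateMatching matcher = new ExhaustiveTemplateMatching(0.85f);
TemplateMatch[] matches = matcher.ProcessImage(frame, template);

if (matches.Length > 0)
{
    Rectangle where = matches[0].Rectangle;   // matches come back sorted best-first
    float score = matches[0].Similarity;
}
```

Note that exhaustive matching is expensive on full frames, so in practice you'd run it only over the restricted search region discussed below.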
UPDATE in response to your question:
Only have a few minutes this morning, so no actual code, but the basic idea is this:
Only consider blobs greater than a configurable size (you'll probably have to determine this empirically).
Retain information on the last two blob locations found and the times at which they were sampled. Let's call these vectors in R2, p1 and p0, sampled at times t1 and t0.
If you assume that velocity is changing slowly, then a preliminary estimate of the new location at time t2 is p2 = p1 + (t2-t1)*(p1-p0)/(t1-t0). This may or may not be a good assumption, so you'll want to verify it by capturing your object over the required range of motion.
You can optionally use this estimate to restrict your blob search to a sub-image centered on the estimated location. After you perform the blob find, take the blob closest to the estimated location as your new location measurement.
One side effect of the above is that you can fall back on the estimate if, for some reason, the blob find fails for a frame. It's dangerous to allow this extrapolation for too long, but it gives you some tolerance for minor noise spikes. A sketch combining all these steps follows.
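Putting those steps together, here's a sketch of the whole loop. Everything outside BlobCounter (the VelocityTracker class and its parameters) is illustrative, and seeding p0/p1 from two initial full-frame blob finds is assumed:

```csharp
using System.Drawing;
using AForge.Imaging;

class VelocityTracker
{
    AForge.Point p0, p1;   // last two accepted positions (seeded by the caller)
    double t0, t1;         // their sample times, with t1 > t0
    int coasted = 0;       // consecutive frames survived on prediction alone

    public AForge.Point Track(Bitmap frame, double t2,
                              int minArea, int searchRadius, int maxCoastFrames)
    {
        // Constant-velocity prediction: p2 = p1 + (t2-t1)*(p1-p0)/(t1-t0)
        float k = (float)((t2 - t1) / (t1 - t0));
        AForge.Point est = new AForge.Point(p1.X + k * (p1.X - p0.X),
                                            p1.Y + k * (p1.Y - p0.Y));

        // Limit the blob search to a window centered on the estimate.
        Rectangle window = new Rectangle((int)est.X - searchRadius,
                                         (int)est.Y - searchRadius,
                                         2 * searchRadius, 2 * searchRadius);
        window.Intersect(new Rectangle(0, 0, frame.Width, frame.Height));

        BlobCounter counter = new BlobCounter();
        using (Bitmap sub = frame.Clone(window, frame.PixelFormat))
        {
            counter.ProcessImage(sub);
        }

        // Largest-enough blob closest to the estimate; blob coordinates
        // are relative to the window, so shift them back to frame space.
        Blob best = null;
        float bestDist = float.MaxValue;
        foreach (Blob b in counter.GetObjectsInformation())
        {
            if (b.Area < minArea) continue;
            float dx = window.X + b.CenterOfGravity.X - est.X;
            float dy = window.Y + b.CenterOfGravity.Y - est.Y;
            float d = dx * dx + dy * dy;
            if (d < bestDist) { bestDist = d; best = b; }
        }

        AForge.Point pos;
        if (best != null)
        {
            pos = new AForge.Point(window.X + best.CenterOfGravity.X,
                                   window.Y + best.CenterOfGravity.Y);
            coasted = 0;
        }
        else if (++coasted <= maxCoastFrames)
        {
            pos = est;          // tolerate a missed frame by coasting
        }
        else
        {
            return est;         // track lost; caller should re-acquire
        }

        p0 = p1; t0 = t1;       // shift the two-sample history
        p1 = pos; t1 = t2;
        return pos;
    }
}
```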
You can probably see how this scheme could progress further to include an estimate of acceleration from recent frames, or to integrate velocity/acceleration over multiple frames to better extrapolate a likely location for the next sample. You could also start to trust that the estimate (with accumulated data from the current and previous frames) is more precise (and perhaps more accurate) than the raw measurement. Eventually you wind up with something like the Kalman filter.
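Since you asked for an implementation rather than theory: AForge.NET doesn't ship a Kalman filter, but a minimal constant-velocity one, run independently per axis, fits in a few dozen lines. The noise parameters q and r below are assumptions you'd tune:

```csharp
// Minimal 1D constant-velocity Kalman filter; run one instance per axis.
// State is [position, velocity]; only position is measured.
class Kalman1D
{
    double x, v;                                 // state estimate
    double p00 = 1, p01 = 0, p10 = 0, p11 = 1;   // state covariance
    readonly double q, r;                        // process / measurement noise (tuned)

    public Kalman1D(double processNoise, double measurementNoise, double initialPos)
    {
        q = processNoise; r = measurementNoise; x = initialPos;
    }

    public double Update(double z, double dt)
    {
        // Predict: advance position by velocity, propagate covariance.
        x += v * dt;
        p00 += dt * (p01 + p10) + dt * dt * p11 + q;
        p01 += dt * p11;
        p10 += dt * p11;
        p11 += q;

        // Correct: blend the prediction with the measured position z.
        double s = p00 + r;                  // innovation covariance
        double k0 = p00 / s, k1 = p10 / s;   // Kalman gains
        double innov = z - x;
        x += k0 * innov;
        v += k1 * innov;

        double n00 = p00 - k0 * p00, n01 = p01 - k0 * p01;
        double n10 = p10 - k1 * p00, n11 = p11 - k1 * p01;
        p00 = n00; p01 = n01; p10 = n10; p11 = n11;

        return x;                            // filtered position
    }
}
```

Feed it the blob centroid from each frame (one filter for X, one for Y); as measurements accumulate, the filtered output settles into the sub-pixel track described above.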