How to do white blob tracking for video or camera capture in Emgu?

I want to make a program using C# with Emgu that can detect white blobs in images from a camera and also track them. The program should also return the IDs of the tracked blobs.

Frame1: http://www.freeimagehosting.net/uploads/ff2ac19054.jpg

Frame2: http://www.freeimagehosting.net/uploads/09e20e5dd6.jpg


The Emgu sample project "VideoSurveilance" in the Emgu.CV.Example solution (Emgu.CV.Example.sln) demonstrates blob tracking and assigns IDs to the tracked blobs.

I'm a newbie to OpenCV, but it seems to me that tracking only "white" blobs may be harder than it sounds. For example, the blobs in your sample pictures aren't really "white", are they? What I think you are really trying to do is "get the blobs that are brighter than the background by a certain amount", i.e. find a gray blob on a black background or a white blob on a gray background.
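If that is the goal, one way to get at it (an untested sketch, assuming the Emgu 2.x Image<,> API; "frame" here just stands for the captured Bgr frame) is to estimate the background with a heavy blur and keep only the pixels that are brighter than that estimate by some margin:

    // convert the captured Bgr frame to grayscale
    Image<Gray, Byte> gray = frame.Convert<Gray, Byte>();

    // rough background estimate: a heavily blurred copy of the frame
    Image<Gray, Byte> background = gray.SmoothBlur(51, 51);

    // Sub saturates at 0, so only pixels brighter than the background survive
    Image<Gray, Byte> brighter = gray.Sub(background);

    // 40 is an arbitrary margin; tune it for your lighting
    Image<Gray, Byte> blobMask = brighter.ThresholdBinary(new Gray(40), new Gray(255));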


It depends on what your background is like. If it is consistently dark, as in the images you attached, you should be able to extract those "white" blobs with a simple threshold. For any smarter segmentation you'll need to use other features as well (e.g. correlation, if your object's color is consistent).
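For a consistently dark background like in your frames, a plain fixed threshold on the grayscale image may already do the job. A minimal, untested sketch (the value 200 is just a guess you will have to tune; "frame" stands for the captured Bgr image):

    // grayscale version of the captured frame
    Image<Gray, Byte> gray = frame.Convert<Gray, Byte>();

    // keep only pixels brighter than the threshold; 200 is an assumed value to tweak
    Image<Gray, Byte> whiteBlobs = gray.ThresholdBinary(new Gray(200), new Gray(255));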


I cannot say the code will work because I haven't tested it.

The general idea is to take the captured frame (assuming you're capturing frames) and filter out noise by thresholding the saturation and value (brightness) channels. The filtered image is then processed as greyscale. Blobs can be labeled by looping through the blob collection generated by the tracker and assigning IDs and bounding boxes.

Also, you may be interested in AForge.NET and the related article Hands Gesture Recognition, which covers the mechanics and implementation of using histograms for computer vision.
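If you want to try the histogram idea with Emgu itself, something along these lines (untested, assuming the Emgu 2.x DenseHistogram API; "gray" is a grayscale frame) lets you inspect the intensity distribution before choosing threshold values:

    // 256-bin intensity histogram of the grayscale frame
    DenseHistogram hist = new DenseHistogram(256, new RangeF(0, 255));
    hist.Calculate(new Image<Gray, Byte>[] { gray }, false, null);

    // MinMax reports the least/most populated bins; on a dark scene the peak is
    // the background, and the bright tail above it is where the blobs live
    float minCount, maxCount;
    int[] minBin, maxBin;
    hist.MinMax(out minCount, out maxCount, out minBin, out maxBin);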

This is a modified version of custom tracker code found on the NUI forums:

static void Main(){
    Capture capture = new Capture(); //create a camera capture
    Image<Bgr, Byte> img = capture.QuerySmallFrame();

    OptimizeBlobs(img);

    BackgroundStatisticsModel bsm = new BackgroundStatisticsModel(img, Emgu.CV.CvEnum.BG_STAT_TYPE.FGD_STAT_MODEL);
    bsm.Update(img);

    BlobSeq oldBlobs = new BlobSeq();
    BlobSeq newBlobs = new BlobSeq();

    ForgroundDetector fd = new ForgroundDetector(Emgu.CV.CvEnum.FORGROUND_DETECTOR_TYPE.FGD);
    BlobDetector bd = new BlobDetector(Emgu.CV.CvEnum.BLOB_DETECTOR_TYPE.CC);
    BlobTracker bt = new BlobTracker(Emgu.CV.CvEnum.BLOBTRACKER_TYPE.CC);

    BlobTrackerAutoParam btap = new BlobTrackerAutoParam();
    btap.BlobDetector = bd;
    btap.ForgroundDetector = fd;
    btap.BlobTracker = bt;
    btap.FGTrainFrames = 5;

    BlobTrackerAuto bta = new BlobTrackerAuto(btap);


    Application.Idle += new EventHandler(delegate(object sender, EventArgs e)
    {  //run this until application closed (close button click on image viewer)

        //******* capture image ******* 
        img = capture.QuerySmallFrame();

        OptimizeBlobs(img);

        bd.DetectNewBlob(img, bsm.Foreground, newBlobs, oldBlobs);

        bta.Process(img);   // feed the current frame to the auto-tracker so its blob list is updated

        List<MCvBlob> blobs = new List<MCvBlob>(bta);   // BlobTrackerAuto enumerates its tracked blobs

        MCvFont font = new MCvFont(Emgu.CV.CvEnum.FONT.CV_FONT_HERSHEY_SIMPLEX, 1.0, 1.0);
        foreach (MCvBlob blob in blobs)
        {
           img.Draw(Rectangle.Round(blob), new Bgr(255.0, 255.0, 255.0), 2);   // img is a Bgr image, so draw with a Bgr color
           img.Draw(blob.ID.ToString(), ref font, Point.Round(blob.Center), new Bgr(255.0, 255.0, 255.0));
        }

        Image<Gray, Byte> fg = bta.GetForgroundMask();
    });
}

public static Image<Gray, Byte> OptimizeBlobs(Image<Bgr, Byte> img)
{
    // can improve image quality, but expensive if real-time capture
    img._EqualizeHist(); 

    // convert img to temporary HSV object
    Image<Hsv, Byte> imgHSV = img.Convert<Hsv, Byte>();

    // break down HSV
    Image<Gray, Byte>[] channels = imgHSV.Split();
    Image<Gray, Byte> imgHSV_saturation = channels[1];   // saturation channel
    Image<Gray, Byte> imgHSV_value      = channels[2];   // value channel

    //use the saturation and value channel to filter noise. [you will need to tweak these values]
    Image<Gray, Byte> saturationFilter = imgHSV_saturation.InRange(new Gray(0), new Gray(80));
    Image<Gray, Byte> valueFilter = imgHSV_value.InRange(new Gray(200), new Gray(255));

    // combine the filters to get the final image to process.
    Image<Gray, Byte> imgTarget = saturationFilter.And(valueFilter);

    return imgTarget;
}