
Background subtraction in OpenCV (C++)

I want to implement a background averaging method. I have 50 frames of images taken in one second, and some of the frames contain lightning, which I want to extract as the foreground. The frames are taken with a stationary camera, and the frames are grayscale. What I want to do is:

  1. Get the background model
  2. Then compare each frame with the background model to determine whether there is lightning in that frame or not.

I have read some documents on how this might be done using cvAcc(), but I am having difficulty understanding how it works. I would appreciate a piece of code to guide me, as well as links to documents that can help me understand how to implement this.

Thank you in advance.


We had the same task in one of our projects.

To get the background model, we simply create a class BackgroundModel, capture the first (let's say) 50 frames, and calculate the average frame to smooth out pixel noise in the background model.

For example, if you get 8-bit greyscale images (CV_8UC1) from your camera, initialize your model as CV_16UC1 so that the accumulated pixel sums do not clip.

cv::Mat model = cv::Mat(HEIGHT, WIDTH, CV_16UC1, cv::Scalar(0));

While waiting for the first frames to build your model, just add every incoming frame to the model and count the number of received frames.

void addFrame(cv::Mat frame) {
    cv::Mat convertedFrame;
    frame.convertTo(convertedFrame, CV_16UC1);   // widen to 16 bit before summing
    cv::add(convertedFrame, model, model);
    if (++learnedFrames >= FRAMES_TO_LEARN) {    // FRAMES_TO_LEARN = 50
        createMask();
    }
}

The createMask() function calculates the average frame, which we use as the background model.

void createMask() {
    // convertScaleAbs divides by the frame count and already returns an 8-bit (CV_8U) result
    cv::convertScaleAbs(model, mask, 1.0 / learnedFrames);
}

Now you just pass every frame through the BackgroundModel class to a subtract() function. If the result is an empty cv::Mat, the mask is still being calculated; otherwise, you get the background-subtracted frame.

cv::Mat subtract(cv::Mat frame) {
    cv::Mat result;
    if (learnedFrames >= FRAMES_TO_LEARN) {   // FRAMES_TO_LEARN = 50
        cv::subtract(frame, mask, result);
    }
    else {
        addFrame(frame);                      // addFrame() increments learnedFrames
    }
    return result;
}

Last but not least, you can use cv::sum() to calculate the sum of all pixel values in the subtracted frame and decide whether it is a frame containing lightning.
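Below is a minimal sketch of how these pieces could fit together, with the member variables model, mask and learnedFrames declared in one class and a detection loop based on cv::sum(). The input file name and the brightness threshold are illustrative assumptions, not values from the original post.

// Minimal sketch: the snippets above combined into one class, plus a
// detection loop based on cv::sum(). The input path and the brightness
// threshold are illustrative assumptions.
#include <opencv2/opencv.hpp>
#include <iostream>

class BackgroundModel {
public:
    // Returns an empty Mat while the model is still learning,
    // otherwise the background-subtracted frame.
    cv::Mat subtract(const cv::Mat &frame) {
        cv::Mat result;
        if (learnedFrames >= FRAMES_TO_LEARN)
            cv::subtract(frame, mask, result);
        else
            addFrame(frame);
        return result;
    }

private:
    static constexpr int FRAMES_TO_LEARN = 50;
    int learnedFrames = 0;
    cv::Mat model, mask;

    void addFrame(const cv::Mat &frame) {
        cv::Mat converted;
        frame.convertTo(converted, CV_16UC1);              // widen before summing
        if (model.empty())
            model = cv::Mat::zeros(frame.size(), CV_16UC1);
        cv::add(converted, model, model);
        if (++learnedFrames >= FRAMES_TO_LEARN)
            cv::convertScaleAbs(model, mask, 1.0 / learnedFrames); // average -> 8 bit
    }
};

int main() {
    cv::VideoCapture cap("frames.mp4");                    // assumed input source
    BackgroundModel bg;
    cv::Mat frame;
    while (cap.read(frame)) {
        cv::cvtColor(frame, frame, cv::COLOR_BGR2GRAY);
        cv::Mat diff = bg.subtract(frame);
        if (diff.empty())
            continue;                                      // still building the model
        double brightness = cv::sum(diff)[0];              // total pixel intensity
        if (brightness > 100000)                           // threshold: tune for your footage
            std::cout << "possible lightning frame\n";
    }
    return 0;
}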


The MyPolygon() function masks the ROI; after that, the code calculates the absolute pixel difference against the reference image and counts the number of white pixels.
srcImage: reference image.

#include <opencv2/opencv.hpp>
#include <iostream>
#include <random>


using namespace std;
using namespace cv;

cv::Mat MyPolygon( Mat img )
{
  int lineType = 8;
// [(892, 145), (965, 150), (933, 199), (935, 238), (970, 248), (1219, 715), (836, 709), (864, 204)]

  /** Create some points */
  Point rook_points[1][8];
  rook_points[0][0] = Point(892, 145);
  rook_points[0][1] = Point(965, 150);
  rook_points[0][2] = Point(933, 199);
  rook_points[0][3] = Point(935, 238);
  rook_points[0][4] = Point(970, 248);
  rook_points[0][5] = Point(1219, 715);
  rook_points[0][6] = Point(836, 709);
  rook_points[0][7] = Point(864, 204);

  const Point* ppt[1] = { rook_points[0] };
  int npt[] = { 8 };

  cv::Mat mask = cv::Mat::zeros(img.size(), img.type());

  fillPoly( mask,
            ppt,
            npt,
            1,
            Scalar( 255, 0, 0 ),
            lineType
            );

  cv::bitwise_and(mask, img, img);   // keep only the ROI, zero out everything else

  return img;
}

 int main() {
    cv::Mat srcImage = cv::imread("/home/gourav/Pictures/L1 Image.png", cv::IMREAD_GRAYSCALE);
    if (srcImage.empty()){
        std::cerr << "Ref Image not found\n";
        return 1;
    }
    resize(srcImage, srcImage, Size(1280, 720));
    // cout << " Width : " << srcImage.cols << endl;
    // cout << " Height: " << srcImage.rows << endl;
    cv::Mat img = MyPolygon(srcImage);
    
    Mat grayBlur;
    GaussianBlur(srcImage, grayBlur, Size(5, 5), 0);

    VideoCapture cap("/home/gourav/GenralCode/LD3LF1_stream1.mp4"); 
    Mat frames;
    if (!cap.isOpened()){
        std::cout << "Error opening video stream or file" << endl;
        return -1;
    }
    while (1)
    {
        cap >> frames;
        if (frames.empty())
            break;
        
        // Convert current frame to grayscale
        cvtColor(frames, frames, COLOR_BGR2GRAY);

        // cout << "Frame Width : " << frames.cols << endl;
        // cout << "Frame Height: " << frames.rows << endl;

        Mat imageBlure;
        GaussianBlur(frames, imageBlure, Size(5, 5), 0);

        cv::Mat frame = MyPolygon(imageBlure);

        Mat dframe;
        absdiff(frame, grayBlur, dframe);
        
        // imshow("grayBlur", grayBlur);

        // Threshold to binarize
        threshold(dframe, dframe, 30, 255, THRESH_BINARY);        
        
        //White Pixels
        int number = cv::countNonZero(dframe);
        cout<<"Count: "<< number <<"\n";
        if (number > 3000)
        {
            cout<<"generate Alert ";
        }
        // Display Image
        imshow("dframe", dframe);

        char c=(char)waitKey(25);
        if (c==27)
            break;
    }
    cap.release();
    return 0;

 }