
Why create your own Haar-classifier cascades?

I found this tutorial on creating your own Haar-classifier cascades.

This raised the question for me: what are the advantages, if any, of running HaarTraining and creating your own classifier (as opposed to using the cascades provided by OpenCV)?


Haar or LBP cascade classifiers are a common technique for detecting rigid objects. Here are two major reasons for training your own cascade:

  1. The cascades that come with OpenCV do not cover all possible object types. You can use one of the bundled cascades if you are building a face-detection application, but there are no ready-to-use cascades if you need to detect, for example, dogs.

  2. The cascades from OpenCV are good, but they are not the best possible. It is a challenging task, but it is possible to train a cascade with a higher detection rate that produces fewer false positives and false negatives.

One major remark: the haartraining application used in your tutorial is now considered deprecated by the OpenCV team. opencv_traincascade is the newer version, and it has two important features: it supports LBP features and it supports multi-threading (TBB). A typical difference looks like this:

haartraining + single core: > 3 weeks for one classifier.
traincascade + multi-core: < 30 minutes for one classifier.

Worst of all, I don't know of any good tutorials explaining the usage of opencv_traincascade. See this thread for details.
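
For reference, a minimal invocation that selects LBP features might look like the following (the paths and sample counts are placeholders, not taken from any tutorial); as far as I know, the TBB multi-threading is used automatically when OpenCV was built with TBB support:

opencv_traincascade -data lbp_classifier -vec samples.vec -bg negatives.txt \
  -featureType LBP -numStages 20 -numPos 1000 -numNeg 2000 -w 24 -h 24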


I can give you a Linux example. The code and techniques were pulled from a variety of sources. It follows this example, but with a Python version of mergevec, so you don't have to compile the mergevec.cpp file.

Assuming that you have two folders of cropped, ready-to-use positive and negative images (.png files in this example), create two text files listing all the image names:

find positive_images -iname "*.png" > positives.txt
find negative_images -iname "*.png" > negatives.txt

Then use the createsamples.pl script provided by Naotoshi Seo (in the OpenCV/bin folder), which takes the two text files and an output folder and creates the .vec files:

perl createsamples.pl positives.txt negatives.txt 'output' 1500 "opencv_createsamples -bgcolor 0 -bgthresh 0 -maxzangle 0.5 -w 50 -h 50"

Follow that with a Python script created by Blake Wulfe called mergevec.py, which will create an output.vec file by combining all the .vec files in the given subfolder:

python mergevec.py -v samples -o output.vec

Assuming that is all done, running opencv_traincascade as follows should help (note that -numPos is kept well below the 1500 samples generated above, since the training consumes additional positive samples at later stages):

opencv_traincascade -data classifier -vec output.vec -bg negatives.txt \
  -numStages 10 -minHitRate 0.999 -maxFalseAlarmRate 0.5 -numPos 200 \
  -numNeg 400 -w 50 -h 50 -mode ALL

If all that goes well, use your newly created cascade (classifier/cascade.xml) with something like facedetect.py from the OpenCV samples:

opencv-3.0.0-rc1/samples/python2/facedetect.py --cascade classifier/cascade.xml test_movie.mp4
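
If you prefer a self-contained script over the bundled sample, here is a minimal sketch of loading the trained cascade through the Python bindings (the image and output file names are placeholders):

import cv2

# Load the cascade produced by opencv_traincascade
cascade = cv2.CascadeClassifier('classifier/cascade.xml')

# Hypothetical test image; a frame grabbed from test_movie.mp4 would work too
img = cv2.imread('test_image.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# detectMultiScale returns an (x, y, w, h) rectangle for each detection
rects = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5, minSize=(50, 50))
for (x, y, w, h) in rects:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite('detections.png', img)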