How to distinguish photo from text image
I am writing an OCR program. It works fine with scanned texts; however, there are two problems:
- It gives false positives on photos (random rubbish "text" like "bkigopes")
- It is quite slow
The goal is to find all images containing text and extract that text. So, given the problems mentioned, I need a way to quickly reject photos. I hope there is some mathematical (statistical) method, like computing some median statistics, that can easily identify a colorful image lacking the obvious structure that scanned text has.
Such a method/formula should not be fooled by tricky kinds of images, e.g. screenshots containing text, or magazine pages with lots of text and pictures mixed together. Also, colorful text (e.g. red on yellow) should not be rejected.
Has anyone experience with such problem? Any ideas or ready solutions?
I have no prior knowledge/experience in this area whatsoever, but as a complete guess:
Would an entropy calculation work?
If something has high entropy, it's likely to be a photo; if low, it's more likely to be text.
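To make this concrete, here is a minimal sketch of the entropy idea in plain Python, assuming you can get the image's grayscale pixel intensities (0–255) as a flat list; the function name and the sample inputs are illustrative, not from any particular library:

```python
import math
from collections import Counter

def shannon_entropy(pixels):
    """Shannon entropy (bits per pixel) of a sequence of intensity values."""
    counts = Counter(pixels)
    total = len(pixels)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# A binarized text scan concentrates on few intensity levels -> low entropy;
# a photo spreads across many levels -> higher entropy.
text_like = [0] * 90 + [255] * 10      # mostly background, some ink
photo_like = list(range(256))          # wide spread of intensities
assert shannon_entropy(text_like) < shannon_entropy(photo_like)
```

In practice you would compute this over the image's intensity histogram and pick a cutoff empirically from a labeled sample of scans and photos.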
Hope that helps a little...
In general this is quite a difficult task. However, given your particular application, perhaps you can make assumptions about the input to your OCR program.
You mentioned "scanned texts". So I'm assuming this will not be applied to pictures of billboards along the road, where you'd need to recognize text in the midst of a scenic background. This implies that the range of colors is low and the contrast is high.
On the other hand, a photo typically has a very large range of colors with relatively low contrast between neighboring pixels. Of course, this assumption can easily be invalidated given the many styles of photography.
So I think the first thing you could try is converting the image to black and white (not grayscale). Then look at the relative proportions of the two colors: I'd expect a photo to be split much more evenly than a scanned document. The conversion algorithm should be resistant to outliers, so some sort of median might work well as a threshold.
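A rough sketch of that split test, assuming grayscale intensities (0–255) as a flat list; the median threshold and the helper name are illustrative choices, not a fixed recipe:

```python
def ink_fraction(pixels):
    """Binarize around the median intensity and return the fraction of 'dark' pixels.

    A scanned page has a huge mass of pixels at the background intensity, so the
    dark fraction lands far from 0.5; a photo with spread-out intensities splits
    close to 0.5 by construction of the median.
    """
    ordered = sorted(pixels)
    threshold = ordered[len(ordered) // 2]   # median is robust to outliers
    dark = sum(1 for p in pixels if p < threshold)
    return dark / len(pixels)

scan_like = [255] * 90 + [0] * 10       # white page, a little ink
photo_like = list(range(256))           # smooth spread of intensities
assert ink_fraction(scan_like) < ink_fraction(photo_like)
```

You would then classify an image as a photo when the fraction falls within some band around 0.5, tuned on sample data.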
First of all, since magazine pages are a mix, you won't find a single technique that will take an entire image and make a determination. Some kind of segmentation will be needed. If it were me, I'd look for bands of pixels, both horizontal and vertical, that show low variance, and then use those to divide the image into a grid. Then you can test each cell in the grid and remove those which are photos.
Now for the photo test. Like @Mehrdad's entropy approach, you can try compression for a task like this. Different compression algorithms work differently, but a lossless Lempel-Ziv-Welch-style or equivalent compression algorithm ought to compress images of text more than photos. Measuring the size difference between uncompressed and compressed versions would estimate the entropy nicely. After all, entropy is the measure of the best possible lossless compression. With a bit of empirical work this could provide a reasonably reliable classification technique.
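As a sketch of the compression test using `zlib` from the Python standard library (DEFLATE rather than LZW, but the same lossless principle applies); the sample byte strings below are stand-ins for real raw pixel data:

```python
import random
import zlib

def compression_ratio(data):
    """Compressed size / original size; lower means more structure (text-like)."""
    return len(zlib.compress(data, 9)) / len(data)

structured = bytes([0, 255] * 2048)    # repetitive, like a binarized text scan
random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(4096))  # photo-like noise
assert compression_ratio(structured) < compression_ratio(noisy)
```

The repetitive data compresses to a tiny fraction of its size, while incompressible noise stays near (or slightly above) a ratio of 1.0, so a single threshold on the ratio gives a cheap first-pass classifier.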