Adaptive threshold binarization's bad effects

I implemented some adaptive binarization methods. They use a small window and calculate a threshold value at each pixel. There is a problem with these methods: if we select the window size too small, we get the effect shown below (I think the reason is that the window size is small).

[Image: adaptive threshold binarization results (source: piccy.info)]

In the upper left corner is the original image; in the upper right, the global threshold result. The bottom left shows an example of dividing the image into blocks (although what I actually mean is analyzing a small neighborhood around each pixel, for example a 10x10 window). The result of such an algorithm is shown in the bottom right picture: we get a black area where it should be white. Does anybody know how to improve the algorithm to solve this problem?


There should be quite a lot of research going on in this area, but unfortunately I have no good links to give.

An idea, which might work but which I have not tested, is to try to estimate the lighting variation and then remove it before thresholding (which is a better term than "binarization"). The problem is then moved from adaptive thresholding to finding a good lighting model.

If you know anything about the light sources then you could of course build a model from that.

Otherwise a quick hack that might work is to apply a really heavy low pass filter to your image (blur it) and then use that as your lighting model. Then create a difference image between the original and the blurred version, and threshold that.

EDIT: After quick testing, it appears that my "quick hack" is not really going to work at all. After thinking about it I am not very surprised either :)

I = someImage
Ib = blur(I, 'a lot!')
Idiff = I - Ib
It = threshold(Idiff, 'some global threshold')

EDIT 2: I got another idea which could work, depending on how your images are generated. Try estimating the lighting model from the first few rows in the image:

  1. Take the first N rows in the image
  2. Create a mean row from the N collected rows. You now have one row as your background model.
  3. For each row in the image subtract the background model row (the mean row).
  4. Threshold the resulting image.

Unfortunately I am at home without any good tools to test this.
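A rough NumPy sketch of those four steps (untested; it assumes a 2-D grayscale array with dark objects on a light background, and n_rows and t are parameters you would tune):

import numpy as np

def row_background_threshold(img, n_rows=10, t=30):
    img = img.astype(np.float32)
    bg_row = img[:n_rows].mean(axis=0)    # steps 1-2: mean of the first N rows as background model
    diff = img - bg_row                   # step 3: subtract the model row from every row
    return np.where(diff < -t, 0, 255).astype(np.uint8)   # step 4: darker than background -> black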


It looks like you're doing adaptive thresholding wrong. Your images look as if you divided your image into small blocks, calculated a threshold for each block and applied that threshold to the whole block. That would explain the "box" artifacts. Usually, adaptive thresholding means finding a threshold for each pixel separately, with a separate window centered around the pixel.
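For comparison, OpenCV's cv2.adaptiveThreshold does exactly that: it computes a threshold for each pixel from a window centered on it (the file name, block size, and constant below are just placeholders to tune):

import cv2

img = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)   # hypothetical file name
binary = cv2.adaptiveThreshold(img, 255,
                               cv2.ADAPTIVE_THRESH_MEAN_C,  # threshold = local mean minus C
                               cv2.THRESH_BINARY,
                               31,   # blockSize: odd window size centered on each pixel
                               10)   # C: constant subtracted from the local mean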

Another suggestion would be to build a global model for your lighting: In your sample image, I'm pretty sure you could fit a plane (in X/Y/Brightness space) to the image using least-squares, then separate the pixels into pixels brighter than that plane (foreground) and darker than it (background). You can then fit separate planes to the background and foreground pixels, threshold using the mean between these planes again, and improve the segmentation iteratively. How well that would work in practice depends on how well your lighting can be modeled with a linear model.
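A minimal sketch of the initial least-squares plane fit (NumPy only; the iterative foreground/background refinement described above is left out):

import numpy as np

def fit_brightness_plane(img):
    # Fit z = a*x + b*y + c to all pixels in X/Y/brightness space.
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    coeffs, *_ = np.linalg.lstsq(A, img.ravel().astype(np.float64), rcond=None)
    return (A @ coeffs).reshape(h, w)    # plane value at every pixel

plane = fit_brightness_plane(img)
brighter = img > plane                   # split pixels into brighter / darker than the plane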

If the actual objects you try to segment are "thinner" (you said something about barcodes in a comment), you could try a simple opening/closing operation to get a lighting model (i.e. close the image to remove the foreground pixels, then use [closed image + X] as the threshold).
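A sketch of that idea with OpenCV morphology (the structuring-element size and the offset standing in for "X" are assumptions; it presumes dark objects on a light background, where closing removes the thin dark foreground):

import cv2
import numpy as np

img = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)            # hypothetical file name
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (25, 25))   # should be larger than the object width
background = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel).astype(np.int16)  # closing fills in thin dark structures
offset = 10                                                    # the "X" above
binary = np.where(img.astype(np.int16) < background - offset, 0, 255).astype(np.uint8)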

Or, you could try mean-shift filtering to get the foreground and background pixels to the same brightness. (Personally, I'd try that one first)
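If you want to try that, OpenCV's pyramid mean-shift filter is one readily available implementation; note that it expects a 3-channel color image, and the radii below (plus the global Otsu threshold applied afterwards) are only assumptions for illustration:

import cv2

img = cv2.imread('input.png')                              # hypothetical file name, loaded as BGR
filtered = cv2.pyrMeanShiftFiltering(img, sp=21, sr=31)    # spatial window radius, color window radius
gray = cv2.cvtColor(filtered, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)   # then a simple global threshold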


You have very non-uniform illumination and a fairly large object (thus, there is no universal easy way to extract the background and correct the non-uniformity). This basically means you cannot use global thresholding at all; you need adaptive thresholding.

You want to try Niblack binarization. Matlab code is available at http://www.uio.no/studier/emner/matnat/ifi/INF3300/h06/undervisningsmateriale/week-36-2006-solution.pdf (page 4). There are two parameters you'll have to tune by hand: window size (N in that code) and weight.
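Niblack computes, for each pixel, a threshold equal to the local mean plus a weight times the local standard deviation. A compact NumPy/SciPy version of that (with the same two hand-tuned parameters, window size and weight; the values shown are just typical starting points) might look like:

import numpy as np
from scipy.ndimage import uniform_filter

def niblack_threshold(img, window=25, k=-0.2):
    # T(x, y) = local mean + k * local standard deviation
    img = img.astype(np.float64)
    mean = uniform_filter(img, window)
    sq_mean = uniform_filter(img * img, window)
    std = np.sqrt(np.maximum(sq_mean - mean * mean, 0))
    return mean + k * std

binary = (img > niblack_threshold(img)).astype(np.uint8) * 255   # img: 2-D grayscale array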


Try to apply a local adaptive threshold using this procedure:

  1. convolve the image with a mean or median filter
  2. subtract the original image from the convolved one
  3. threshold the difference image

The local adaptive threshold method selects an individual threshold for each pixel.

I'm using this approach extensively and it works fine with images that have a non-uniform background.
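A brief sketch of that three-step procedure with OpenCV (the filter size and final threshold value are assumptions to tune; it assumes dark objects on a lighter background):

import cv2

img = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)          # hypothetical file name
smoothed = cv2.blur(img, (51, 51))                            # step 1: mean filter (a median filter also works)
diff = cv2.subtract(smoothed, img)                            # step 2: convolved image minus original
_, binary = cv2.threshold(diff, 15, 255, cv2.THRESH_BINARY)   # step 3: threshold the difference image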
