Comparing two histograms
For a small project, I need to compare one image with another - to determine if the images are approximately the same or not. The images are smallish, varying from 25 to 100px across. The images are meant to be of the same picture data but are subtly different, so a simple pixel equality check won't work. Consider these two possible scenarios:
- A security (CCTV) camera in a museum looking at an exhibit: we want to quickly see if two different video frames show the same scene, but slight differences in lighting and camera focus means they won't be identical.
- A picture of a vector computer GUI icon rendered at 64x64 compared to the same icon rendered at 48x48 (but both images would be scaled down to 32x32 so the histograms have the same total pixel count).
I've decided to represent each image using histograms - three 1D histograms, one for each RGB channel. It's safe for me to just use colour and to ignore texture and edge histograms. (An alternative approach uses a single 3D histogram per image, but I'm avoiding that as it adds extra complexity.) I therefore need to compare the histograms to see how similar they are, and if the similarity measure passes some threshold value I can say with confidence that the respective images are visually the same. I would compare each image's corresponding channel histograms (e.g. image 1's red histogram with image 2's red histogram, then the green histograms, then the blue histograms - not image 1's red histogram with image 2's blue histogram, that would just be silly).
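To make that concrete, here is a rough sketch of the comparison I have in mind; the compare delegate stands in for whatever distance function this question is about, and the threshold is a value I would tune by hand:

// Each image is summarised as three histograms: [0] = red, [1] = green, [2] = blue.
// The distance function itself is passed in, since choosing it is the open question.
Boolean ImagesLookTheSame(Int32[][] image1Histograms, Int32[][] image2Histograms,
                          Func<Int32[], Int32[], Double> compare, Double threshold) {
    Double totalDistance = 0;
    for( int channel = 0; channel < 3; channel++ ) {
        // Compare corresponding channels only: red with red, green with green, blue with blue.
        totalDistance += compare( image1Histograms[channel], image2Histograms[channel] );
    }
    return totalDistance <= threshold;
}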
Let's say I have these three histograms, which represent a summary of the red RGB channel for three images (using 5 bins for 7-pixel images for simplicity):
H1             H2             H3
  X            X                      X
  X   X        X       X             X
X X   X X      X X   X X      X X X X X
0 1 2 3 4      0 1 2 3 4      0 1 2 3 4
H1 = [ 1, 3, 0, 2, 1 ]
H2 = [ 3, 1, 0, 1, 2 ]
H3 = [ 1, 1, 1, 1, 3 ]
Image 1 (H1) is my reference image, and I want to see if Image 2 (H2) and/or Image 3 (H3) is similar to Image 1. Note that in this example, Image 2 is similar to Image 1, but Image 3 is not.
When I did a cursory search for "histogram difference" algorithms (at least those I could understand), I found a popular approach was to just sum the differences between each bin; however, this approach often fails because it weighs all bin differences the same.
To demonstrate the problem with this approach, here it is in C# code:
Int32[] image1RedHistogram = new Int32[] { 1, 3, 0, 2, 1 };
Int32[] image2RedHistogram = new Int32[] { 3, 1, 0, 1, 2 };
Int32[] image3RedHistogram = new Int32[] { 1, 1, 1, 1, 3 };

// Sums the absolute per-bin differences between two histograms of equal length.
Int32 GetDifference(Int32[] x, Int32[] y) {
    Int32 sumOfDifference = 0;
    for( int i = 0; i < x.Length; i++ ) {
        sumOfDifference += Math.Abs( x[i] - y[i] );
    }
    return sumOfDifference;
}
The output of which is:
GetDifference( image1RedHistogram, image2RedHistogram ) == 6
GetDifference( image1RedHistogram, image3RedHistogram ) == 6
This is incorrect: Image 2 should score as much closer to Image 1 than Image 3 does, yet both comparisons give the same value of 6.
Is there a way to determine the difference between two histograms that takes into account the shape of the distribution?
Comparing histograms is quite a subject in itself.
You've got two big classes of comparison functions: bin-to-bin comparison and cross-bin comparison.
- Bin-to-bin comparison: as you stated, the standard sum of differences is quite bad. An improvement is the Chi-squared distance, which divides each squared bin difference by the size of the bins involved, so the same absolute difference counts for more when the bins are small. If H1.red[0] = 0.001 and H2.red[0] = 0.011, then that difference is much more important than if H1.red[0] = 0.1 and H2.red[0] = 0.11, even though in both cases |H1.red[0] - H2.red[0]| = 0.01. (A sketch of this distance follows the list.)
- Cross-bin comparison: a standard example, the bin-similarity matrix approach, requires some similarity matrix M where M(i,j) is the similarity between bins i and j. Assume bin[i] is red. If bin[j] is dark red, then M(i,j) is large; if bin[j] is green, M(i,j) is small. The distance between histograms H1 and H2 is then sqrt((H1-H2)'*M*(H1-H2)). This method takes into account what you've said about "close" bins (it is also sketched after the list). The Earth Mover's Distance (EMD) is another kind of cross-bin distance.
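For illustration, here is a rough C# sketch of both distances, in the same style as the GetDifference function from the question. The similarity matrix is something you would have to build yourself from how "close" your bins are:

// Chi-squared distance: the same absolute bin difference counts for more when the
// bins involved are small.
Double GetChiSquaredDistance(Double[] h1, Double[] h2) {
    Double sum = 0;
    for( int i = 0; i < h1.Length; i++ ) {
        Double total = h1[i] + h2[i];
        if( total > 0 ) {
            sum += ( h1[i] - h2[i] ) * ( h1[i] - h2[i] ) / total;
        }
    }
    return sum;
}

// Cross-bin (quadratic form) distance: sqrt((H1-H2)' * M * (H1-H2)), where m[i,j]
// says how similar bin i is to bin j. M must be symmetric and positive semi-definite
// for the square root to be well defined.
Double GetCrossBinDistance(Double[] h1, Double[] h2, Double[,] m) {
    int n = h1.Length;
    Double[] difference = new Double[n];
    for( int i = 0; i < n; i++ ) {
        difference[i] = h1[i] - h2[i];
    }
    Double sum = 0;
    for( int i = 0; i < n; i++ ) {
        for( int j = 0; j < n; j++ ) {
            sum += difference[i] * m[i, j] * difference[j];
        }
    }
    return Math.Sqrt( Math.Max( sum, 0 ) ); // Max guards against tiny negative rounding errors
}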
To finish, I've got three points:
- You should read this paper on histogram distance. It's quite easy and introduces you to histogram distances. All the distances I talked about are summed up nicely in chapter 1. Honestly, the final thing described in the article is not that complex, but it's probably overkill for your case.
- Cross-bin distance is very good, but can be costly (i.e. slow to compute, because it involves a matrix and is therefore O(n^2) in the number of bins). The simplest way to circumvent the expensive cross-bin computation (and it is widely done) is to do some soft assignment: if a pixel is red, then you should fill ALL the bins that look remotely like red (of course, giving more weight to the closest colours). Then you can use a bin-to-bin algorithm. (A sketch of this soft assignment follows the list.)
- A bit more math-centric: the previous point was all about reducing a cross-bin comparison to a bin-to-bin comparison. In fact, it consists of implicitly diagonalizing the similarity matrix M. If you can diagonalize M = P'*D*P, where P' is the transpose of P, then sqrt((H1-H2)'*M*(H1-H2)) = sqrt((H1-H2)'*P'*D*P*(H1-H2)) = sqrt((P*(H1-H2))'*D*(P*(H1-H2))). Depending on how trivial it is for you to compute P*(H1-H2), this can save you computation time. Intuitively, if H1 is your original histogram, P*H1 is a soft assignment and you are using the implicit similarity matrix M = P'*Id*P.
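As an illustration of the soft assignment from point 2, here is a rough C# sketch; the one-bin neighbourhood and the 0.5 weight are arbitrary choices, and in practice you would derive the weights from how similar neighbouring bins actually are:

// Builds a softly-assigned histogram: each value also contributes, with a smaller
// weight, to its neighbouring bins, so "close" colours end up in overlapping bins
// and a cheap bin-to-bin distance behaves a bit like a cross-bin one.
Double[] SoftAssign(Int32[] channelValues, Int32 binCount, Int32 maxValue) {
    Double[] histogram = new Double[binCount];
    Double binWidth = (Double)( maxValue + 1 ) / binCount;
    foreach( Int32 value in channelValues ) {
        Int32 bin = (Int32)( value / binWidth );
        histogram[bin] += 1.0;              // full weight to the "true" bin
        if( bin > 0 ) {
            histogram[bin - 1] += 0.5;      // arbitrary smaller weight to the left neighbour
        }
        if( bin < binCount - 1 ) {
            histogram[bin + 1] += 0.5;      // and to the right neighbour
        }
    }
    return histogram;
}

Two histograms built this way can then be compared with any bin-to-bin distance, such as the chi-squared one above.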
I'm surprised that no one has mentioned the OpenCV implementation of histogram comparison, which can easily handle multichannel images (grayscale, RGB, RGBA, etc.) in different formats (uchar, float, double, etc.).
It includes the Bhattacharyya distance, Chi-Square, correlation and intersection methods. You can find the compareHist(InputArray H1, InputArray H2, int method) function in the manual here.
Earth Mover's Distance (EMD) is often used for this type of histogram comparison. EMD uses a value that defines the cost of 'moving' pixels from one bin of the histogram to another, and provides the total cost of transforming a specific histogram into a target one. The farther away a bin is, the higher the cost.
For example, moving 5 units from red[0] to red[1] would cost c*1*5, while moving those 5 units from red[0] to red[10] (in a histogram with more bins than your example) would cost c*10*5.
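For 1D histograms with the same total count and a cost of 1 between adjacent bins, as in the question's 5-bin example, the EMD reduces to summing the absolute differences of the running (cumulative) totals. A rough C# sketch:

// 1D EMD for two histograms with equal total mass and unit cost between adjacent
// bins: track how much mass has to be carried past each bin boundary and sum it.
Double GetEmd1D(Int32[] h1, Int32[] h2) {
    Double emd = 0;
    Double carried = 0; // surplus (or deficit) being carried to the next bin
    for( int i = 0; i < h1.Length; i++ ) {
        carried += h1[i] - h2[i];
        emd += Math.Abs( carried );
    }
    return emd;
}

On the question's histograms this gives 3 for H1 vs H2 but 5 for H1 vs H3, so the two cases are no longer tied the way they were with the plain sum of differences.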
There are several implementations out there. FastEMD has code in C++, Java and Matlab. I believe OpenCV has some support as well.
There are many papers published using this technique for large image database similarity searching.
I find the chi-squared test to be a good place to start when comparing histograms. If you don't have the same number of entries in each histogram you have to be a bit more careful, as you can't use the 'normal' expression. From memory, if you assume that the histograms have unequal numbers of entries, the chi-square test generalises to
(1/(M*N)) * SUM_i [ (M*n_i - N*m_i)^2 / (m_i + n_i) ]
where M and N are the total numbers of entries in the two histograms, m_i is the number of entries in bin i of histogram M and n_i is the number of entries in bin i of histogram N.
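A rough C# sketch of that expression, following the definitions above:

// Chi-squared statistic for two histograms with (possibly) different total counts:
// (1/(M*N)) * SUM_i (M*n_i - N*m_i)^2 / (m_i + n_i).
Double GetChiSquaredStatistic(Int32[] histogramM, Int32[] histogramN) {
    Double totalM = 0, totalN = 0;
    for( int i = 0; i < histogramM.Length; i++ ) {
        totalM += histogramM[i];
        totalN += histogramN[i];
    }
    Double sum = 0;
    for( int i = 0; i < histogramM.Length; i++ ) {
        Double mi = histogramM[i];
        Double ni = histogramN[i];
        if( mi + ni > 0 ) { // bins that are empty in both histograms contribute nothing
            sum += ( totalM * ni - totalN * mi ) * ( totalM * ni - totalN * mi ) / ( mi + ni );
        }
    }
    return sum / ( totalM * totalN );
}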
Another test is the Kolmogorov-Smirnov test. This test looks at the maximum difference between the cumulative probability distributions of the two histograms. This is harder to implement; I think Numerical Recipes in C has a code snippet for it, and I'm pretty sure it's in Matlab. If you're more interested in differences in histogram shape and not so much the exact values, this may be a better test; it's also non-parametric.
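If you just want the statistic itself (the maximum gap between the two cumulative distributions) rather than a full significance test, it is easy to compute directly; a rough C# sketch, assuming both histograms use the same bins:

// Kolmogorov-Smirnov statistic: the maximum absolute difference between the two
// cumulative, normalised distributions. Turning this into a p-value needs the
// KS distribution (see e.g. Numerical Recipes).
Double GetKsStatistic(Int32[] h1, Int32[] h2) {
    Double total1 = 0, total2 = 0;
    for( int i = 0; i < h1.Length; i++ ) {
        total1 += h1[i];
        total2 += h2[i];
    }
    Double cdf1 = 0, cdf2 = 0, maxDifference = 0;
    for( int i = 0; i < h1.Length; i++ ) {
        cdf1 += h1[i] / total1;
        cdf2 += h2[i] / total2;
        maxDifference = Math.Max( maxDifference, Math.Abs( cdf1 - cdf2 ) );
    }
    return maxDifference;
}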
You basically want to look at probability distances. There are many and you have to decide which is right for your application. Lately, I've had luck with Chi-squared and Kullback-Leibler.
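For example, the Kullback-Leibler divergence on two normalised histograms can be sketched in C# like this; the epsilon is an arbitrary guard, since KL is undefined when a bin of Q is zero while the matching bin of P is not:

// Kullback-Leibler divergence KL(P || Q) = SUM_i p_i * log(p_i / q_i), for
// histograms normalised to sum to 1. Note it is not symmetric: KL(P||Q) != KL(Q||P).
Double GetKullbackLeiblerDivergence(Double[] p, Double[] q) {
    const Double epsilon = 1e-10; // arbitrary guard against empty bins in q
    Double sum = 0;
    for( int i = 0; i < p.Length; i++ ) {
        if( p[i] > 0 ) {
            sum += p[i] * Math.Log( p[i] / Math.Max( q[i], epsilon ) );
        }
    }
    return sum;
}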
Normalize your histograms by dividing the value in each bin of an incoming histogram by the total number of pixels the histogram is based on. Then use @tkerwin's EMD approach.
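For example:

// Turns raw bin counts into proportions that sum to 1, so histograms built from
// images with different pixel counts become directly comparable.
Double[] Normalize(Int32[] histogram, Int32 totalPixelCount) {
    Double[] normalized = new Double[histogram.Length];
    for( int i = 0; i < histogram.Length; i++ ) {
        normalized[i] = (Double)histogram[i] / totalPixelCount;
    }
    return normalized;
}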
As others have mentioned, the Earth Mover's Distance or EMD (a.k.a. the Wasserstein metric) is probably the optimal solution. The Shortlist Method for fast EMD computation is available in the R package transport. It was introduced in a 2014 paper comparing it to other methods and showing faster computation times. The only drawback is that it's in R, which isn't fast unless it's programmed in C++ under the hood.
I think EMD is a good solution to the cross-bin problem compared with bin-to-bin methods. However, as others have mentioned, EMD can take a very long time to compute.