SVM for binary data with Hamming distance
I have a standard {-1,+1} machine learning problem. The main difference is that the data points are binary strings, so their proximity is measured by Hamming distance. Can an SVM be applied in this case? Which SVM library is better suited for this task?
If a kernel k is positive definite, then for any pair of examples x and z the determinant of the 2x2 Gram matrix is non-negative:
| k(x, x)  k(x, z) |
| k(z, x)  k(z, z) |  =  k(x,x)k(z,z) - k(x,z)^2 >= 0
For a distance (the Hamming distance included) the following properties hold:
For any x, y, z:
1) d(x, z) >= 0 and d(x, z) = 0 <=> x = z
2) symmetry: d(x, z) = d(z, x)
3) triangle inequality: d(x, z) <= d(x, y) + d(y, z)
Considering k to be the Hamming distance, according to 1) we would have:
a) k(x,x) = k(z,z) = 0
But for k to be a positive definite kernel we need:
b) k(x,x)k(z,z) - k(x,z)^2 >= 0
Applying a) to b) we get:
-k(x,z)^2 >= 0
k(x,z)^2 <= 0
For a real-valued k this forces k(x,z) = 0 for every pair of examples, yet the Hamming distance between two distinct strings is strictly positive. Therefore the Hamming distance is not a positive definite kernel.
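A minimal numerical sketch of this argument (the toy strings and the hamming helper below are added here purely for illustration): the 2x2 Gram matrix built from the Hamming distance has a negative determinant and a negative eigenvalue.

import numpy as np

def hamming(x, z):
    # Number of positions at which two equal-length strings differ.
    return sum(a != b for a, b in zip(x, z))

x, z = "aab", "baa"
gram = np.array([[hamming(x, x), hamming(x, z)],
                 [hamming(z, x), hamming(z, z)]], dtype=float)

print(np.linalg.det(gram))       # -4.0: the determinant test fails
print(np.linalg.eigvalsh(gram))  # [-2.  2.]: one negative eigenvalue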
Another answer argues that it is a valid kernel because it is an inner product in the following space: K("aab","baa") = [0,1,0,1,1,0] \dot [1,0,0,1,0,1].
This is a nice way to define a feature map for a kernel, but it is not the Hamming distance. The Hamming distance between "aab" and "baa" is 2 (the first and third characters differ), but
[0,1,0,1,1,0] \dot [1,0,0,1,0,1] = 1.
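A quick check of both quantities (assuming the one-hot encoding a -> [0,1], b -> [1,0] implied by the example) confirms the mismatch: the inner product counts matching positions, so dot(u, v) = len(s) - hamming(s, t).

import numpy as np

ONE_HOT = {"a": [0, 1], "b": [1, 0]}  # assumed encoding from the example

def encode(s):
    # Concatenate the per-character one-hot codes of a string.
    return np.array([bit for ch in s for bit in ONE_HOT[ch]])

u, v = encode("aab"), encode("baa")
print(u @ v)                                      # 1: matching positions
print(sum(a != b for a, b in zip("aab", "baa")))  # 2: Hamming distance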
If the Hamming distance is not positive definite, that doesn't mean it can't be used with an SVM, but you certainly lose the guarantee of solving a convex optimization problem.
This is probably best handled by using an SVM library that allows you to create a custom kernel function (e.g. libSVM, SVMLight, scikits). Then you would write a Hamming distance function to compute the distance between two strings and plug it in as the kernel function.
The only problem is, I'm not sure the Hamming distance actually is a kernel, i.e. that it satisfies Mercer's condition. It's obviously symmetric, but I don't know whether it's positive definite.
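For illustration, a minimal sketch of the precomputed-kernel route in scikit-learn (the toy strings and labels are made up; it uses the match count len - hamming as the similarity rather than the raw distance, given the doubt above about the distance itself being a kernel):

import numpy as np
from sklearn.svm import SVC

def hamming(x, z):
    return sum(a != b for a, b in zip(x, z))

strings = ["aab", "baa", "aaa", "bbb"]
labels = [+1, -1, +1, -1]

# Gram matrix of pairwise similarities over the training strings.
n = len(strings)
gram = np.array([[len(strings[i]) - hamming(strings[i], strings[j])
                  for j in range(n)] for i in range(n)], dtype=float)

clf = SVC(kernel="precomputed").fit(gram, labels)

# Prediction needs the similarities of each test string to the training set.
gram_test = np.array([[len(t) - hamming(t, s) for s in strings]
                      for t in ["aba"]], dtype=float)
print(clf.predict(gram_test))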
This paper proposes a kernel for Hamming distances measured between categorical features. It's simply a matter of replacing the Euclidean distance in the standard exponential kernel with the Hamming distance.
It is also possible to combine Euclidean and Hamming distances into one kernel, which would be good for datasets with a mix of continuous and discrete variables.
The good news is that they also prove that this kernel is indeed positive definite (on page 14).
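Based on that description, the kernel would look something like k(x, z) = exp(-gamma * hamming(x, z)). A sketch, with gamma an arbitrary bandwidth parameter and a quick eigenvalue check on a toy sample:

import numpy as np

def hamming(x, z):
    return sum(a != b for a, b in zip(x, z))

def exp_hamming_kernel(x, z, gamma=1.0):
    # Exponential kernel with the Euclidean distance replaced by Hamming.
    return np.exp(-gamma * hamming(x, z))

strings = ["aab", "baa", "aaa", "bbb", "abb"]
gram = np.array([[exp_hamming_kernel(x, z) for z in strings] for x in strings])

# All eigenvalues come out non-negative for this sample, consistent with
# the positive definiteness proved in the paper.
print(np.linalg.eigvalsh(gram))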
Like StompChicken says, it is unclear whether the Hamming distance is a valid kernel.
Unless I'm missing something, I think it is a valid kernel, because it is an inner product in the following space: K("aab","baa") = [0,1,0,1,1,0] \dot [1,0,0,1,0,1].
After understanding this "encoding", you can use any SVM library that supports a linear kernel and encode your strings as in the previous example.
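A minimal end-to-end sketch of this approach (toy data, with the tiny alphabet {a, b} from the earlier example assumed):

import numpy as np
from sklearn.svm import LinearSVC

ONE_HOT = {"a": [0, 1], "b": [1, 0]}  # assumed per-character encoding

def encode(s):
    # Concatenate the per-character one-hot codes of a string.
    return np.array([bit for ch in s for bit in ONE_HOT[ch]])

strings = ["aab", "baa", "aaa", "bbb"]
labels = [+1, -1, +1, -1]

X = np.array([encode(s) for s in strings])
clf = LinearSVC().fit(X, labels)
print(clf.predict([encode("aba")]))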