Fitting Gaussian KDE in numpy/scipy in Python

I am fitting a Gaussian kernel density estimator to a variable that is the difference of two vectors, called "diff", as follows: gaussian_kde_covfact(diff, smoothing_param) -- where gaussian_kde_covfact is defined as:

import numpy as np
from scipy import stats

class gaussian_kde_covfact(stats.gaussian_kde):
    def __init__(self, dataset, covfact='scotts'):
        self.covfact = covfact
        stats.gaussian_kde.__init__(self, dataset)

    def _compute_covariance_(self):
        '''not used'''
        self.inv_cov = np.linalg.inv(self.covariance)
        self._norm_factor = np.sqrt(np.linalg.det(2 * np.pi * self.covariance)) * self.n

    def covariance_factor(self):
        # Map the stored bandwidth setting to the factor used by gaussian_kde.
        if self.covfact in ['sc', 'scotts']:
            return self.scotts_factor()
        if self.covfact in ['si', 'silverman']:
            return self.silverman_factor()
        elif self.covfact:
            return float(self.covfact)
        else:
            raise ValueError(
                'covariance factor has to be scotts, silverman or a number')

    def reset_covfact(self, covfact):
        # Switch to a different bandwidth setting and recompute the covariance.
        self.covfact = covfact
        self.covariance_factor()
        self._compute_covariance()
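
For reference, a minimal usage sketch (the data below is made up purely for illustration): build the estimator from a difference vector and evaluate the density on a grid.

a = np.random.randn(500)
b = np.random.randn(500)
diff = a - b

kde = gaussian_kde_covfact(diff, 'scotts')   # or 'silverman', or a numeric factor
grid = np.linspace(-4, 4, 200)
density = kde(grid)                          # evaluate the estimated density on the grid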

This works, but there is an edge case where diff is a vector of all 0s. In that case, I get the error:

 File "/srv/pkg/python/python-packages/python26/scipy/scipy-0.7.1/lib/python2.6/site-packages/scipy/stats/kde.py", line 334, in _compute_covariance
    self.inv_cov = linalg.inv(self.covariance)
  File "/srv/pkg/python/python-packages/python26/scipy/scipy-0.7.1/lib/python2.6/site-packages/scipy/linalg/basic.py", line 382, in inv
    if info>0: raise LinAlgError, "singular matrix"
numpy.linalg.linalg.LinAlgError: singular matrix

What's a way to get around this? In this case, I'd like it to return a density that's essentially peaked completely at a difference of 0, with no mass everywhere else.

thanks.


A density whose mass is entirely concentrated at one point is not Gaussian, so strictly speaking, what you want to do is undefined (and such a distribution does not have an invertible covariance).

Now, in your case, for a vector which is all zeros, you could special-case it and bypass the whole infrastructure. A simple way to detect the case is to compute the maximum of the absolute values of diff and compare it to the machine epsilon (numpy.finfo(x.dtype).eps for a vector x). You could also detect it by catching the LinAlgError, but then you would have to be careful to distinguish a covariance that is genuinely ill-defined from one that is singular merely because all the entries are 0.
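
One way to wire that check into the workflow is sketched below, assuming you wrap the construction in a helper (safe_gaussian_kde is a made-up name, and the point-mass fallback is only a crude stand-in for "all mass at a difference of 0"):

import numpy as np

def safe_gaussian_kde(diff, covfact='scotts'):
    diff = np.asarray(diff, dtype=float)
    # Treat the data as degenerate if every entry is numerically zero.
    if np.max(np.abs(diff)) < np.finfo(diff.dtype).eps:
        # Bypass the KDE machinery: return a callable that behaves like a
        # point mass at 0 (infinite density at 0, zero elsewhere).
        def point_mass(points):
            points = np.atleast_1d(points)
            return np.where(points == 0.0, np.inf, 0.0)
        return point_mass
    return gaussian_kde_covfact(diff, covfact)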

