Regression in MATLAB Assuming Student's t-Distributed Error Terms
I see that it is possible to use regress/regstats for OLS, and I found an online implementation of L1 regression (Laplace errors), but I can't quite figure out how to implement t-distributed error terms. I have tried maximizing the log-likelihood of the residuals, but I don't seem to be getting the right answer.
classdef student < handle
    methods (Static)
        % Find the sigma that maximizes the log-likelihood for a given B
        function s = findLonS(r, df)
            n = length(r);
            % if x ~ t location-scale distribution with df degrees of
            % freedom, then (x - u)/sigma ~ t(df)
            f = @(s) -sum(log(tpdf(r ./ s, df)));
            s = fminunc(f, (r'*r)/n);
        end

        function B = regress(X, Y, df)
            [n, m] = size(X);
            bInit = ones(m, 1);
            r = (Y - X*bInit);
            s = student.findLonS(r, df);
            % if x ~ t location-scale distribution with df degrees of
            % freedom, then (x - u)/sigma ~ t(df)
            f = @(b) -sum(log(tpdf((Y - X*b) ./ s, df)));
            options = optimset('MaxFunEvals', 10000, 'TolX', 1e-16, 'TolFun', 1e-16);
            [B, fval] = fminunc(f, bInit, options);
        end
    end
end
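For reference, this is roughly how I am testing it; the simulated coefficients, scale, and degrees of freedom below are made up for illustration (requires the Statistics and Optimization Toolboxes):

rng(1);
n  = 500;  df = 5;
X  = [ones(n, 1), randn(n, 2)];    % design matrix with intercept
b0 = [1; 2; -3];                   % true coefficients
Y  = X*b0 + 0.5*trnd(df, n, 1);    % t-distributed errors, scale 0.5
Bt   = student.regress(X, Y, df);  % t-likelihood fit (code above)
Bols = X \ Y;                      % OLS for comparison
disp([b0, Bt, Bols])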
Comparing against an R implementation (which I know has been tested and is accurate), the solutions I am getting from this are wrong.
Any suggestions for fixing it, or ideas about where I could find a working solution?
My guess would be that you have to adjust the scale s for the given b. This would mean either alternately optimizing b, then adjusting s, and optimizing b again, or possibly rewriting your objective as

f = @(b) -sum(log(tpdf((Y - X*b) ./ student.findLonS(Y - X*b, df), df)));
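For what it's worth, here is a minimal sketch of a third option (untested against your R results): optimize b and log(s) jointly. One thing to check in your findLonS: for a location-scale t, the density of a residual r is tpdf(r/s, df)/s, so the negative log-likelihood needs a +log(s) term per observation; without it the objective keeps improving as s grows, so the scale estimate is not well defined. The function name regress_t is just a placeholder.

% Sketch: joint ML over b and log(s), using the location-scale t
% log-likelihood log(tpdf(r/s, df)) - log(s) for each residual r.
% Optimizing log(s) keeps the scale positive without constraints.
function [B, s] = regress_t(X, Y, df)
    m = size(X, 2);
    nll = @(theta) -sum(log(tpdf((Y - X*theta(1:m)) ./ exp(theta(end)), df)) ...
                        - theta(end));        % theta(end) = log(s)
    bOls   = X \ Y;                           % OLS warm start
    theta0 = [bOls; log(std(Y - X*bOls))];
    opts   = optimset('MaxFunEvals', 1e4, 'MaxIter', 1e4);
    theta  = fminunc(nll, theta0, opts);
    B = theta(1:m);
    s = exp(theta(end));
end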