Creating and training a perceptron in MATLAB for gender classification

I am coding a perceptron to learn to classify gender in pictures of faces. I am very new to MATLAB, so I need a lot of help. I have a few questions:

  1. I am trying to code for a function:

    function [y] = testset(x,w)  
    %y = sign(sigma(x*w-threshold))
    

    where y is the predicted result, x is the training/testing set passed in as a very large matrix, and w is the weight vector. The part after the % is what I am trying to write, but I do not know how to express it in MATLAB code. Any ideas out there?

  2. I am trying to code a second function:

    function [err] = testerror(x,w,y)  
    %err = sigma(max(0,-w*x*y))
    

    w, x, and y have the same meanings as above, and err is my error function, which I am trying to minimize through the steps of the perceptron.

  3. I am trying to add a step to my perceptron that lowers the error rate by using gradient descent on my original equation. Does anyone know how I can update w using gradient descent in order to minimize the error function, using an if/then statement?
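For questions 1 and 2, here is a minimal sketch of how those two formulas might look in MATLAB, assuming x has one example per row, w is a column vector, and sigma means summation (MATLAB's sum). The extra threshold parameter and the elementwise reading of the error formula are my assumptions, not something fixed by your equations:

    function y = testset(x, w, threshold)
    % Predicted labels: sign() maps positive activations to +1,
    % negative ones to -1 (and exactly zero to 0).
    y = sign(x * w - threshold);
    end

    function err = testerror(x, w, y)
    % One dimension-consistent reading of sigma(max(0,-w*x*y)):
    % a hinge-style error that is zero for correctly classified
    % points and grows with the negative margin otherwise.
    err = sum(max(0, -y .* (x * w)));
    end

For question 3, note that the classic perceptron update does not branch on the error value itself; it tests each prediction and moves w toward misclassified points, e.g. w = w + eta * x(j,:)' * y(j) for some learning rate eta (a hypothetical name here).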

I can put up the code I have up till now if that would help you answer these questions.

Thank you!

edit--------------------------

OK, so I am still working on the code for this, and would like to put it up when I have something more complete. My biggest question right now is:

I have the following function:

function [y] = testset(x,w)  
y = sign(sum(x*w-threshold))

Now I know that I am supposed to put a threshold in, but cannot figure out what I am supposed to use as the threshold! Any ideas out there?
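One common way to sidestep choosing a threshold by hand is to absorb it into the weight vector as a bias term: append a constant-1 column to the data and let training learn the extra weight. A sketch, under the assumption that X has one example per row:

    X_aug = [X, ones(size(X, 1), 1)];  % extra constant-1 feature
    w_aug = zeros(size(X_aug, 2), 1);  % last entry plays the role of -threshold
    y = sign(X_aug * w_aug);           % no explicit threshold needed

The training updates then adjust the bias weight along with the others, so no fixed threshold ever has to be picked.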

edit----------------------------

this is what I have so far. Changes still need to be made to it, but I would appreciate input, especially regarding structure, and advice for making the changes that need to be made!

function [y] = Perceptron_Aviva(X,w)
    y = sign(sum(X*w-1));
end

function [err] = testerror(X,w,y)
    err = sum(max(0,-w*X*y));
end

%function [w] = perceptron(X,Y,w_init)
%w = w_init;
%end

%------------------------------

% input samples
X = X_train;

% output class [-1,+1];
Y = y_train;

% init weight vector
w_init = zeros(size(X,1));
w = w_init;


%---------------------------------------------
loopcounter = 0

while abs(err) > 0.1 && loopcounter < 100

    for j=1:size(X,1)

        approx_y(j) = Perceptron_Aviva(X(j),w(j))

        err = testerror(X(j),w(j),approx_y(j))

        if err > 0 %wrong (structure is correct, test is wrong)
            w(j) = w(j) - 0.1 %wrong
        elseif err < 0 %wrong
            w(j) = w(j) + 0.1 %wrong
        end

       % -----------
       % if sign(w'*X(:,j)) ~= Y(j) %wrong decision?
       %      w = w + X(:,j) * Y(j);   %then add (or subtract) this point to w
    end

end


You can read this question I asked some time ago.

It uses MATLAB code and a perceptron function:

function [w] = perceptron(X,Y,w_init)

w = w_init;
for iteration = 1 : 100  %<- in practice, use some stopping criterion!
  for ii = 1 : size(X,2)         %cycle through training set
    if sign(w'*X(:,ii)) ~= Y(ii) %wrong decision?
      w = w + X(:,ii) * Y(ii);   %then add (or subtract) this point to w
    end
  end
  sum(sign(w'*X)~=Y)/size(X,2)   %show misclassification rate
end

and it is called from driver code like this (credit to @Itamar Katz), using random data:

% input samples
X1=[rand(1,100);rand(1,100);ones(1,100)];   % class '-1'
X2=[rand(1,100);1+rand(1,100);ones(1,100)]; % class '+1'
X=[X1,X2];

% output class [-1,+1];
Y=[-ones(1,100),ones(1,100)];

% init weight vector
w=[.5 .5 .5]';

% call perceptron
wtag=perceptron(X,Y,w);
% predict
ytag=wtag'*X;


% plot prediction over original data
figure;hold on
plot(X1(1,:),X1(2,:),'b.')
plot(X2(1,:),X2(2,:),'r.')

plot(X(1,ytag<0),X(2,ytag<0),'bo')
plot(X(1,ytag>0),X(2,ytag>0),'ro')
legend('class -1','class +1','pred -1','pred +1')

I guess this can give you an idea of how to build the functions you described. For the error, compare the expected result with the actual result (the class label).


Assume your dataset is X, the data points, and Y, the class labels.

f=newp(X,Y)

creates a perceptron.

If you want to create an MLP then:

f=newff(X,Y,NN)

where NN is the network architecture, i.e. an array that designates the number of neurons at each hidden layer. For example

NN=[5 3 2]

will correspond to a network with 5 neurons in the first hidden layer, 3 in the second, and 2 in the third.
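For completeness, a sketch of creating, training, and evaluating such a network with the older Neural Network Toolbox API; exact signatures may differ across toolbox versions, so treat this as illustrative:

    NN = [5 3 2];           % three hidden layers
    net = newff(X, Y, NN);  % create the MLP
    net = train(net, X, Y); % train with the default algorithm
    yhat = sim(net, X);     % predictions on the training data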


What you call the threshold is the bias in machine-learning nomenclature. It should be left as an input for the user, because it is adjusted during training.

Also, I wonder why you are not using the built-in MATLAB functions, i.e. newp or newff, e.g.

ff=newp(X,Y)

Then you can set the properties of the object ff to select gradient descent and so on.
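For example, to select plain gradient descent as the training function (a sketch under the assumption of the older network-object API; traingd applies to feed-forward nets created with newff rather than to hard-limit perceptrons):

    ff = newff(X, Y, 5);     % one hidden layer of 5 neurons
    ff.trainFcn = 'traingd'; % select batch gradient descent
    ff.trainParam.lr = 0.1;  % learning rate (illustrative value)
    ff = train(ff, X, Y);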
