Can you help me with linear activation of my Simple Classifier Neural Network in pyBrain?
I'm trying a very simple case using the Python library pyBrain and I can't get it to work. There is likely a very simple reason, so I hope someone can help!
1) A simple XOR works fine.
2) Classifying the LEDs displayed on a digital clock to their numerical output value works fine.
e.g.
[ 1. 1. 1. 0. 1. 1. 1.] => [ 0.]
[ 0. 0. 1. 0. 0. 1. 0.] => [ 1.]
[ 1. 0. 1. 1. 1. 0. 1.] => [ 2.]
[ 1. 0. 1. 1. 0. 1. 1.] => [ 3.]
[ 0. 1. 1. 1. 0. 1. 0.] => [ 4.]
[ 1. 1. 0. 1. 0. 1. 1.] => [ 5.]
[ 1. 1. 0. 1. 1. 1. 1.] => [ 6.]
[ 1. 0. 1. 0. 0. 1. 0.] => [ 7.]
[ 1. 1. 1. 1. 1. 1. 1.] => [ 8.]
[ 1. 1. 1. 1. 0. 1. 1.] => [ 9.]
3) Classifying a numerical value to the LED output to drive a digital display doesn't work.
e.g.
[ 0.] => [ 1. 1. 1. 0. 1. 1. 1.]
etc. (as above, but reversed).
I'm using a simple linear activation with 10 inputs, 1 output, and I've tried more than 12 neurons in the hidden layer.
My confusion is: shouldn't the network be able to remember the pattern with 10 neurons in the hidden layer?
I'm sure there is something obvious I'm missing, so, please feel free to enlighten my stupidity!
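For concreteness, my setup looks roughly like this (a simplified sketch, not my exact code; the hidden-layer size and training loop here are just placeholders):

    from pybrain.tools.shortcuts import buildNetwork
    from pybrain.structure import LinearLayer
    from pybrain.datasets import SupervisedDataSet
    from pybrain.supervised.trainers import BackpropTrainer

    # Digit -> 7-segment LED pattern (the reverse of case 2)
    ds = SupervisedDataSet(1, 7)
    ds.addSample([0.], [1, 1, 1, 0, 1, 1, 1])
    ds.addSample([1.], [0, 0, 1, 0, 0, 1, 0])
    # ... remaining digits as in the table above ...

    # Linear activations throughout (buildNetwork's output layer is
    # linear by default; the hidden layer is made linear explicitly here)
    net = buildNetwork(1, 12, 7, hiddenclass=LinearLayer)

    trainer = BackpropTrainer(net, ds)
    for _ in range(1000):
        trainer.train()  # one pass over the dataset per call

    # A fully linear net can only produce outputs that vary linearly with
    # the input digit, so the predicted patterns won't match the targets
    print(net.activate([0.]))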
A linear activation is fine when you're doing regression (a single output node representing a range of values), but for classification (binary outputs representing matches) you're better off using an activation that limits the range of values, such as a sigmoid or tanh.
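In pyBrain, that change might look something like this (a minimal sketch; the layer sizes and epoch count are guesses, and ds is the SupervisedDataSet(1, 7) built in the question's sketch above):

    from pybrain.structure import SigmoidLayer  # TanhLayer would also work
    from pybrain.tools.shortcuts import buildNetwork
    from pybrain.supervised.trainers import BackpropTrainer

    # Sigmoid hidden and output layers squash every output into (0, 1),
    # so each of the 7 LED outputs can be thresholded to on/off
    net = buildNetwork(1, 12, 7, hiddenclass=SigmoidLayer, outclass=SigmoidLayer)

    trainer = BackpropTrainer(net, ds)
    for _ in range(2000):
        trainer.train()

    # Round the squashed outputs back to a binary LED pattern
    print([round(v) for v in net.activate([3.])])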
I think rather than SO, MetaOptimize might help you more.
I have only taken an introductory class in ML, but from what I remember, neural networks work for many problems, yet they are like black boxes: if they do not work, it is difficult to determine why. In particular, there are no firm rules for choosing the number of nodes in the hidden layer (only rules of thumb for some problems).