Equations for outputs of nodes in hidden and output layers of a Neural Network
Hey guys, I'm new to neural networks. I want to know how to derive the equations for the outputs of nodes in the hidden and output layers of a neural network. I would like to know the answer to the question below and how you arrived at it. I haven't been able to find any approachable reading material on this either.
Assume I have a binary classification problem, and assume that I have a multi-layer neural network with one hidden layer. Assume that I have a sigmoid activation function given by f(z) = 1/(1+e^-z). Does anyone know how I find the equation for the output of the nodes in the hidden layer and the output of the nodes in the output layer?
Thanks guys, any help would be great.
I reduced a three-layer NN (1 input node, 3 hidden nodes, 1 output node) to a set of equations, and I ended up with those shown in the image. (Note: I'm assuming the image upload worked; image uploads are blocked by my company's content filter, so I can't verify it.)
- I labeled the output of each node as o, subscripted as {layer,neuron}.
- The weights were labeled as w with subscripts indicating {to_layer,neuron} and superscripts indicating {from_layer,neuron}.
- The bias terms b were subscripted as {layer,neuron}.
As shown, the scaled NN input (Cet) was formulated as the output of the node on layer 1 (labeled as Eqn 3 in the pic). My sigmoidal activation function resembled yours (Eqn 4). From there, the output of layer 2, node 1 was computed (Eqn 5), then output of layer 2, node 2 (Eqn 6), then output of layer 2, node 3 (Eqn 7).
The output (BISt in my pic) was then computed as the weighted sum of the hidden layer activations - which was then passed through the activation function.
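In case the image doesn't come through, the forward-pass equations in that notation look roughly like this (a sketch in the subscript/superscript convention above, not a transcription of the actual image, and with the input scaling omitted):

```latex
% Hidden layer (layer 2), node j = 1, 2, 3:
% weighted input from the single layer-1 node, plus bias, through the sigmoid
o_{2,j} = f\!\left( w^{1,1}_{2,j}\, o_{1,1} + b_{2,j} \right), \quad j = 1, 2, 3

% Output layer (layer 3), single node:
% weighted sum of the three hidden activations, plus bias, through the sigmoid
o_{3,1} = f\!\left( \sum_{j=1}^{3} w^{2,j}_{3,1}\, o_{2,j} + b_{3,1} \right)
```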
This strategy worked well for my application.
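If it helps to see the same forward pass as code, here's a minimal sketch of a 1-3-1 network with sigmoid activations in NumPy. The weight and bias values are made up for illustration; only the structure (weighted sum, bias, sigmoid at each layer) follows the description above.

```python
import numpy as np

def sigmoid(z):
    """Sigmoid activation f(z) = 1 / (1 + e^-z)."""
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical parameters for a 1-input, 3-hidden, 1-output network.
# The values are arbitrary; in practice they come from training.
w_hidden = np.array([0.5, -1.2, 0.8])   # weight from the input node to each hidden node
b_hidden = np.array([0.1, 0.0, -0.3])   # hidden-layer biases
w_out    = np.array([1.0, -0.5, 0.7])   # weight from each hidden node to the output node
b_out    = 0.2                          # output-node bias

def forward(x):
    # Hidden layer: weighted input plus bias, passed through the sigmoid.
    o_hidden = sigmoid(w_hidden * x + b_hidden)
    # Output node: weighted sum of the hidden activations plus bias,
    # again passed through the sigmoid.
    return sigmoid(w_out @ o_hidden + b_out)

print(forward(0.5))
```

For binary classification, the output lands in (0, 1) and can be read as the probability of the positive class, thresholded at 0.5.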