Is there a version of backward error propagation where the output is a probability?

I want to create a feed-forward neural network where the outputs in the training data are boolean, 0 or 1.

In use, however, I want the output to be the probability that a given input would have produced a 1 (as opposed to a 0).

Given that the most common forms of backprop employ a sigmoid activation function, σ(x) = 1 / (1 + e^(-x)), it seems unlikely that this would result in actual probabilities as output (the sigmoid curve doesn't really seem like a "probabilistic" function - sorry, I know this is hand-wavey).

Or perhaps I'm wrong. Can the outputs of a feed-forward neural net, using sigmoid activation functions and trained using backprop, be safely treated as the actual probability of getting a 1 as opposed to a 0?


Yes, this will work the way you want if you use a standard sigmoid activation function.

The maths that proves this is a little complicated, but it effectively boils down to the fact that you are training the network to output the average value of the training targets for each input (this is a consequence of using the squared-error loss in standard backprop). Since the only two possible target values are 0 and 1, that average is exactly the probability of getting a 1.
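The key step, which the answer skips, is standard: for a fixed input x, the output value p that minimizes the expected squared error is the conditional mean of the targets, and for 0/1 targets that mean is the probability of a 1:

    \operatorname*{arg\,min}_{p}\ \mathbb{E}\!\left[(y - p)^2 \mid x\right]
        = \mathbb{E}[y \mid x]
        = 1 \cdot P(y = 1 \mid x) + 0 \cdot P(y = 0 \mid x)
        = P(y = 1 \mid x)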
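If you want to convince yourself empirically, here is a minimal sketch (my own illustration, not code from the answer): a single sigmoid unit trained by plain backprop with squared error on boolean labels, on synthetic data where the true probabilities are known. The data generator and all constants are assumptions chosen just for the demo.

    # Minimal sketch: one sigmoid unit, squared-error backprop, 0/1 labels.
    # All names and constants here are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Synthetic data: the true probability of a 1 is itself a sigmoid of x,
    # so we know the target probabilities exactly.
    true_w, true_b = 2.0, -1.0
    x = rng.uniform(-3, 3, size=(5000, 1))
    p_true = sigmoid(true_w * x + true_b)
    y = (rng.uniform(size=p_true.shape) < p_true).astype(float)  # boolean labels

    # Full-batch gradient descent on the mean squared error -- the loss the
    # answer assumes -- through a single sigmoid output unit.
    w, b, lr = 0.0, 0.0, 1.0
    for _ in range(5000):
        out = sigmoid(w * x + b)
        delta = (out - y) * out * (1 - out)   # dLoss/dPreactivation via chain rule
        w -= lr * np.mean(delta * x)
        b -= lr * np.mean(delta)

    # The trained output should track P(y = 1 | x), not just the 0/1 class.
    test_x = np.array([[-2.0], [0.0], [2.0]])
    print("predicted:", sigmoid(w * test_x + b).ravel())
    print("true prob:", sigmoid(true_w * test_x + true_b).ravel())

With enough samples the predicted values should land close to the true probabilities, which is exactly the behaviour the answer describes.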
