I've read on Wikipedia that neural-network functions defined on a field of arbitrary real/rational numbers (along with algorithmic schemas and the speculative 'transrecursive' models) have more com
Could increasing the number of training cases (i.e., the amount of training data) for Precision Neural Networks lead to problems (like over-fitting, for example)?
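More training data usually reduces over-fitting rather than causes it; over-fitting is driven mainly by model capacity relative to the amount and noisiness of the data. Either way, the standard check is to hold out a validation split and compare training and validation error. A minimal sketch, assuming scikit-learn and a synthetic dataset as stand-ins for the asker's actual network and data:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in data; the real question's data and network would go here.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 10))
y = 2.0 * X[:, 0] + 0.1 * rng.standard_normal(500)

# Hold out 20% of the data to measure generalization.
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(50,), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)

# A large gap (training score much higher than validation score) signals over-fitting.
print("train R^2:", model.score(X_tr, y_tr))
print("valid R^2:", model.score(X_val, y_val))
```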
There are various activation functions: sigmoid, tanh, etc. And there are also a few initializer functions: Nguyen and Widrow, random, normalized, constant, zero, etc. So do these have much effect on the
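They can have a noticeable effect, and it is easy to see empirically: the same random inputs pushed through the same layer give different output distributions depending on the initializer and activation. A minimal sketch, with made-up layer sizes and simplified initializer formulas (not the exact formulas from any particular library):

```python
import numpy as np

# Made-up layer sizes and batch; the point is only to compare output distributions.
rng = np.random.default_rng(0)
n_in, n_out = 64, 32
x = rng.standard_normal((100, n_in))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Simplified initializers used purely for illustration.
initializers = {
    "random_small": lambda: rng.uniform(-0.5, 0.5, (n_in, n_out)),
    "normalized":   lambda: rng.uniform(-1.0, 1.0, (n_in, n_out)) / np.sqrt(n_in),
    "zero":         lambda: np.zeros((n_in, n_out)),
}

for name, init in initializers.items():
    w = init()
    for act_name, act in [("sigmoid", sigmoid), ("tanh", np.tanh)]:
        out = act(x @ w)
        print(f"{name:13s} {act_name:7s} mean={out.mean():+.3f} std={out.std():.3f}")
```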
I'm trying to determine how to transform my "meaningful input" into data for an Artificial Neural Network and how to turn the output into "meaningful output".
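One common recipe, sketched below with made-up features and labels (none of this comes from the question itself): scale numeric inputs into a bounded range, one-hot encode the target classes, and map the network's output back to a label by taking the most active unit.

```python
import numpy as np

classes = ["cat", "dog", "bird"]   # hypothetical output labels

def encode_input(age_years, weight_kg):
    # Scale each feature by an assumed maximum so values land roughly in [0, 1].
    return np.array([age_years / 20.0, weight_kg / 50.0])

def encode_target(label):
    # One-hot vector: one output unit per class.
    one_hot = np.zeros(len(classes))
    one_hot[classes.index(label)] = 1.0
    return one_hot

def decode_output(network_output):
    # The most active output unit is taken as the predicted class.
    return classes[int(np.argmax(network_output))]

x = encode_input(age_years=3, weight_kg=4.5)      # -> [0.15, 0.09]
y = encode_target("cat")                          # -> [1., 0., 0.]
print(decode_output(np.array([0.1, 0.7, 0.2])))   # -> dog
```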
Can anyone list all the different techniques used in face detection? Techniques like neural networks, support vector machines, eigenfaces, etc.
Edit: A more pointed question: what is the derivative of softmax to be used in my gradient descent? This is more or less a research project for a course, and my understanding of NNs is very/fairly
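For reference, the standard result is that for s = softmax(z), ds_i/dz_j = s_i (δ_ij - s_j); when softmax feeds a cross-entropy loss, the gradient with respect to z simplifies to s - y. A small self-contained check (the test vector z is arbitrary):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())        # shift for numerical stability
    return e / e.sum()

def softmax_jacobian(z):
    # ds_i/dz_j = s_i * (delta_ij - s_j)  =>  J = diag(s) - s s^T
    s = softmax(z)
    return np.diag(s) - np.outer(s, s)

z = np.array([1.0, 2.0, 0.5])      # arbitrary test vector
J = softmax_jacobian(z)

# Finite-difference check of column j of the Jacobian.
j, eps = 1, 1e-6
e_j = np.eye(len(z))[j]
numeric = (softmax(z + eps * e_j) - softmax(z - eps * e_j)) / (2 * eps)
print(np.allclose(J[:, j], numeric, atol=1e-6))   # True
```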
I'm using Ruby's ai4r gem to build a neural network. Version 1.1 of the gem allowed me to simply do a Marshal.dump(network) to a file, and I could load the network back up whenever I wanted.
I tried to write a neural network system, but even when running through simple AND/OR/NOR-type problems, the outputs seem to group around 0.5 (for a bias of -1) and 0.7 (for a bias of 1).
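As a point of comparison, here is a minimal single-neuron sketch that does converge on AND with plain gradient descent (the learning rate and epoch count are arbitrary choices for this toy problem); a working implementation should push the outputs toward 0 and 1 rather than leaving them clustered near 0.5.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)        # AND truth table

w = 0.1 * rng.standard_normal(2)
b = 0.0
lr = 1.0                                       # arbitrary, but enough for this toy problem

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):
    out = sigmoid(X @ w + b)
    delta = (out - y) * out * (1 - out)        # squared-error gradient through the sigmoid
    w -= lr * X.T @ delta
    b -= lr * delta.sum()

print(np.round(sigmoid(X @ w + b), 2))         # should be close to [0, 0, 0, 1]
```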