
Which optimization algorithm should I use to optimize the weights of a multilayer perceptron?

Actually, these are three questions:

Which optimization algorithm should I use to optimize the weights of a multilayer perceptron, if I knew...

1) only the value of the error function? (blackbox)

2) the gradient? (first derivative)

3) the gradient and the Hessian? (second derivative)

I have heard that CMA-ES should work very well for 1) and BFGS for 2), but I would like to know whether there are alternatives, and I don't know which algorithm to choose for 3).
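As a concrete illustration of the three cases, here is a minimal sketch using scipy.optimize.minimize. The Rosenbrock test function stands in for the network's error surface; for a real MLP you would evaluate the loss over the training set and obtain the gradient (and Hessian) from backpropagation:

    import numpy as np
    from scipy.optimize import minimize, rosen, rosen_der, rosen_hess

    w0 = np.zeros(5)  # stand-in for the flattened weight vector

    # 1) blackbox: only error values available -> derivative-free method
    res1 = minimize(rosen, w0, method='Nelder-Mead')

    # 2) gradient available -> quasi-Newton method
    res2 = minimize(rosen, w0, jac=rosen_der, method='BFGS')

    # 3) gradient and Hessian available -> trust-region Newton method
    res3 = minimize(rosen, w0, jac=rosen_der, hess=rosen_hess, method='trust-ncg')

    for res in (res1, res2, res3):
        print(res.nfev, res.fun)  # function evaluations needed, final error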


Ok, so this doesn't really answer the question you initially asked, but it does provide a solution to the problem you mentioned in the comments.

Problems such as dealing with a continuous action space are normally handled not by changing the error measure but by changing the architecture of the overall network. This lets you keep using the same highly informative error signal while still solving the problem you want to solve.

Some possible architectural changes that could accomplish this are discussed in the solutions to this question. In my opinion, the most promising route is a modified Q-learning technique in which the state and action spaces are both represented by self-organizing maps; it is discussed in a paper mentioned in the above link, and a rough sketch follows below.
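Here is a minimal sketch of the state-space half of that idea, not the paper's exact method (the paper also places a SOM over the action space, omitted here): a small self-organizing map quantizes the continuous state into the index of its best-matching unit, and tabular Q-learning runs over those indices. The sizes, learning rates, and the gym-style env interface are assumptions for illustration only:

    import numpy as np

    rng = np.random.default_rng(0)
    n_units, state_dim, n_actions = 25, 2, 3
    som = rng.normal(size=(n_units, state_dim))  # SOM prototype vectors
    Q = np.zeros((n_units, n_actions))           # Q-table indexed by SOM unit

    def bmu(state):
        """Index of the best-matching unit (nearest prototype)."""
        return int(np.argmin(np.linalg.norm(som - state, axis=1)))

    def train(env, episodes=500, alpha=0.1, gamma=0.99, eps=0.1, som_lr=0.05):
        for _ in range(episodes):
            state, done = env.reset(), False
            while not done:
                s = bmu(state)
                som[s] += som_lr * (state - som[s])  # move the winner toward the state
                a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
                next_state, reward, done = env.step(a)  # assumed gym-style interface
                s2 = bmu(next_state)
                Q[s, a] += alpha * (reward + gamma * np.max(Q[s2]) * (not done) - Q[s, a])
                state = next_state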

I hope this helps.


I finally solved this problem: there are efficient algorithms for optimizing neural networks in reinforcement learning (with fixed topology), e.g. CMA-ES (CMA-NeuroES) or CoSyNE.
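For the blackbox case, a minimal sketch with the cma package (pip install cma); the episode_cost function below is a toy stand-in for actually running the fixed-topology network in the environment:

    import numpy as np
    import cma  # pip install cma

    def episode_cost(weights):
        # Toy stand-in: in practice, run the network with these weights in the
        # environment and return a cost (lower is better).
        return float(np.sum(np.asarray(weights) ** 2))

    # 20 weights, initial mean at 1.0, initial step size 0.5
    es = cma.CMAEvolutionStrategy(20 * [1.0], 0.5)
    while not es.stop():
        candidates = es.ask()  # sample a population of candidate weight vectors
        es.tell(candidates, [episode_cost(w) for w in candidates])
    print(es.result.xbest, es.result.fbest)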

The best optimization algorithm for supervised learning seems to be Levenberg-Marquardt (LMA), an algorithm specifically designed for least-squares problems. When there are many connections and weights, however, LMA does not work very well because the matrices it builds require space quadratic in the number of weights. In that case I use Conjugate Gradient (CG) instead.
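As a minimal sketch of why LMA fits this setting: scipy.optimize.least_squares with method='lm' operates on the vector of per-sample residuals rather than on a scalar error, which is exactly the least-squares structure LMA exploits. The one-neuron "network" below is only an illustrative stand-in for a small MLP:

    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(0)
    x = np.linspace(-2, 2, 50)
    y = np.tanh(1.5 * x - 0.5) + 0.05 * rng.normal(size=x.size)  # noisy targets

    def residuals(p):
        a, b, c, d = p
        return a * np.tanh(b * x + c) + d - y  # one residual per training sample

    fit = least_squares(residuals, x0=np.ones(4), method='lm')  # Levenberg-Marquardt
    print(fit.x, 0.5 * np.sum(fit.fun ** 2))  # fitted weights, final error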

Computing the exact Hessian matrix does not accelerate optimization in practice: algorithms that only approximate the second derivative (BFGS, CG, LMA) are faster and more efficient.

Edit: for large-scale learning problems, Stochastic Gradient Descent (SGD) often outperforms all of the other algorithms mentioned.
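For completeness, a minimal sketch of plain mini-batch SGD on a one-hidden-layer network with hand-coded backpropagation; the sizes, learning rate, and toy regression task are all illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 3))
    y = np.sin(X.sum(axis=1, keepdims=True))  # toy regression target

    W1, b1 = rng.normal(size=(3, 16)) * 0.5, np.zeros(16)
    W2, b2 = rng.normal(size=(16, 1)) * 0.5, np.zeros(1)
    lr, batch = 0.05, 32

    for step in range(2000):
        idx = rng.integers(0, len(X), size=batch)  # sample a mini-batch
        xb, yb = X[idx], y[idx]
        h = np.tanh(xb @ W1 + b1)                  # forward pass
        pred = h @ W2 + b2
        g = 2 * (pred - yb) / batch                # d(MSE)/d(pred)
        # backpropagation
        gW2, gb2 = h.T @ g, g.sum(axis=0)
        gh = (g @ W2.T) * (1 - h ** 2)
        gW1, gb1 = xb.T @ gh, gh.sum(axis=0)
        # SGD update: one noisy gradient step per mini-batch
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1

    print(float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2)))  # final MSE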
