
iterative machine learning algorithm [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance. Closed 11 years ago.

I need a machine learning algorithm that takes training samples of the form (x, y) and computes an approximate function f: X -> Y such that the error is minimized. The error is defined as the difference between y and f(x).

But this learning algorithm must be an iterative one, and as the number of iterations increases, the error must decrease.

Any example would be helpful.


A neural network is one algorithm that has both features:

  1. It can train iteratively on new data.
  2. It can train on the same data iteratively, so the error decreases with each iteration (backpropagation learning).
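A minimal sketch of this idea, assuming NumPy and a toy sin(x) dataset (the hidden-layer size, learning rate, and iteration count are arbitrary choices): a one-hidden-layer network trained by backpropagation, where each pass over the same data lowers the mean squared error.

    import numpy as np

    # Toy data: learn f(x) = sin(x) from noisy samples (x, y)
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(200, 1))
    y = np.sin(X) + 0.1 * rng.normal(size=X.shape)

    # One hidden layer with tanh activation, trained by full-batch gradient descent
    hidden = 16
    W1 = rng.normal(scale=0.5, size=(1, hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.5, size=(hidden, 1))
    b2 = np.zeros(1)
    lr = 0.05

    for it in range(1, 2001):
        # Forward pass
        h = np.tanh(X @ W1 + b1)
        pred = h @ W2 + b2
        err = pred - y
        mse = np.mean(err ** 2)

        # Backpropagation of the mean squared error
        g_pred = 2 * err / len(X)
        g_W2 = h.T @ g_pred
        g_b2 = g_pred.sum(axis=0)
        g_h = g_pred @ W2.T
        g_z = g_h * (1 - h ** 2)          # derivative of tanh
        g_W1 = X.T @ g_z
        g_b1 = g_z.sum(axis=0)

        # Gradient-descent update: each iteration reduces the training error
        W1 -= lr * g_W1; b1 -= lr * g_b1
        W2 -= lr * g_W2; b2 -= lr * g_b2

        if it % 500 == 0:
            print(f"iteration {it}: MSE = {mse:.4f}")

Printing the MSE every few hundred iterations shows it decreasing, which is exactly the behaviour the question asks for.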


  1. (stochastic) gradient boosting,
  2. AdaBoost,

...and any boosting algorithm in general, because the boosting process improves the classifier iteratively; a sketch follows below.
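A short sketch of the boosting behaviour, assuming scikit-learn and the same kind of toy sin(x) data as above (the dataset and hyperparameters are illustrative, not prescribed): each boosting stage fits a small tree to the residual error of the ensemble so far, so the training error shrinks as stages are added.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.metrics import mean_squared_error

    # Toy regression data
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(300, 1))
    y = np.sin(X).ravel() + 0.1 * rng.normal(size=300)

    # Gradient boosting: each stage adds a small tree fit to the current residuals
    model = GradientBoostingRegressor(n_estimators=200, learning_rate=0.1, max_depth=2)
    model.fit(X, y)

    # staged_predict yields the ensemble's prediction after 1, 2, ... stages,
    # so the error can be tracked as a function of the boosting iteration
    for i, pred in enumerate(model.staged_predict(X), start=1):
        if i % 50 == 0:
            print(f"after {i} boosting iterations: MSE = {mean_squared_error(y, pred):.4f}")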
