iterative machine learning algorithm [closed]
I need a machine learning algorithm that takes training samples of the form (x, y) and computes an approximate function f: X -> Y such that the error is minimized, where the error is defined as the difference between y and f(x).
The algorithm must be iterative, and as the number of iterations increases, the error must decrease.
Any example would be helpful.
A neural network is one algorithm that has both features:
1. It can be trained iteratively on new data.
2. It can be trained iteratively on the same data, so the error decreases with each iteration (backpropagation learning).
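Here is a minimal sketch of that second point: a tiny one-hidden-layer network fit by gradient descent (backpropagation) in plain NumPy. The data, layer sizes, and learning rate are illustrative choices, not anything from the question.

```python
# Minimal iterative training sketch: one hidden layer, tanh activation,
# mean-squared error, plain gradient descent. Toy setup for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x) plus noise.
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X) + 0.1 * rng.normal(size=(200, 1))

# Network parameters: 1 input -> 16 hidden (tanh) -> 1 output.
W1 = rng.normal(scale=0.5, size=(1, 16))
b1 = np.zeros((1, 16))
W2 = rng.normal(scale=0.5, size=(16, 1))
b2 = np.zeros((1, 1))
lr = 0.05

for epoch in range(1000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y
    mse = np.mean(err ** 2)

    # Backward pass: gradients of MSE w.r.t. each parameter.
    n = len(X)
    grad_pred = 2 * err / n
    grad_W2 = h.T @ grad_pred
    grad_b2 = grad_pred.sum(axis=0, keepdims=True)
    grad_h = grad_pred @ W2.T
    grad_z = grad_h * (1 - h ** 2)   # tanh'(z) = 1 - tanh(z)^2
    grad_W1 = X.T @ grad_z
    grad_b1 = grad_z.sum(axis=0, keepdims=True)

    # Gradient-descent update; the training MSE shrinks over iterations.
    W1 -= lr * grad_W1
    b1 -= lr * grad_b1
    W2 -= lr * grad_W2
    b2 -= lr * grad_b2

    if epoch % 200 == 0:
        print(f"epoch {epoch:4d}  MSE {mse:.4f}")
```

Printing the MSE every few hundred epochs shows exactly the behaviour you asked for: error decreasing as the iteration count grows.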
- (stochastic) gradient boosting,
- AdaBoost,
...and any boosting algorithm in general, because the boosting process improves the classifier iteratively: each round adds a weak learner that corrects the errors of the ensemble so far.
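A quick sketch of that iterative improvement, using scikit-learn's GradientBoostingRegressor (the question names no library, so this is just one convenient choice). `staged_predict()` yields the ensemble's prediction after each boosting iteration, so you can watch the training error fall as weak learners are added:

```python
# Gradient boosting sketch: training error per boosting iteration.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X).ravel() + 0.1 * rng.normal(size=300)

model = GradientBoostingRegressor(n_estimators=200, learning_rate=0.1,
                                  random_state=0)
model.fit(X, y)

# staged_predict gives the prediction after each boosting round,
# so the MSE printed here decreases as the iteration count grows.
for i, pred in enumerate(model.staged_predict(X), start=1):
    if i % 50 == 0:
        print(f"iteration {i:3d}  train MSE {np.mean((pred - y) ** 2):.4f}")
```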