
Data mining algorithms comparison [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.


Closed 7 years ago.


Are there any comparisons of data mining algorithms? Comparisons in terms of performance, accuracy, and the amount of data required to build a robust model. It seems that ensemble learning algorithms like bagging and boosting are considered the most accurate at the moment. I don't have any specific problem to solve; it's just a theoretical question.


You should search the web for survey papers on Data Mining.

Here is one: Top Ten Algorithms in Data Mining, which gives a ranking rather than a side-by-side comparison. (It might include one, though; I haven't gone through the whole paper.)


It is very difficult to compare machine learning algorithms in general in terms of robustness and accuracy. However, one can study some of their pros and cons. Below I consider a few of the best-known machine learning algorithms (this is in no way a complete account, just my opinion):

Decision trees: most prominently the C4.5 algorithm. They have the advantage of producing an easily interpreted model. They are, however, susceptible to overfitting. Many variants exist.
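A minimal sketch of this idea using scikit-learn (my choice of library and dataset here, not something from the question). Note that scikit-learn implements an optimized CART algorithm rather than C4.5, so this only approximates the technique; limiting the tree depth is one simple way to curb the overfitting mentioned above, and the learned rules can be printed as readable if/else conditions.

```python
# Sketch: a depth-limited decision tree (CART, not C4.5) and its readable rules.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# max_depth is an illustrative choice; restricting depth reduces overfitting.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X_train, y_train)

print("test accuracy:", tree.score(X_test, y_test))
print(export_text(tree))  # the model as human-readable if/else rules
```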

Bayesian networks have strong statistical roots. They are especially useful in domains where inference must be performed over incomplete data.
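To give a flavour of the probabilistic approach, here is a sketch using Gaussian naive Bayes, which corresponds to the simplest possible Bayesian network structure (every feature conditionally independent given the class). This is my illustrative simplification; a full Bayesian network with inference over partially observed variables would require a dedicated library.

```python
# Sketch: Gaussian naive Bayes as the simplest Bayesian-network special case.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
model = GaussianNB()
print("mean CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
```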

Artificial neural networks are a widely used and powerful technique. In theory they can approximate any arbitrary function. However, they require tuning a large number of parameters (network structure, number of nodes, activation functions, ...). They also have the disadvantage of working as a black box (the model is difficult to interpret).
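A minimal sketch of a feed-forward neural network (multi-layer perceptron) with scikit-learn. The dataset and hyperparameters below (layer sizes, activation, iteration budget) are arbitrary choices for illustration; in practice they are exactly the knobs that need tuning, which is the drawback mentioned above.

```python
# Sketch: a small multi-layer perceptron; every argument below is a tunable knob.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

X, y = load_digits(return_X_y=True)
X = StandardScaler().fit_transform(X)  # networks train better on scaled inputs
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

mlp = MLPClassifier(hidden_layer_sizes=(64, 32), activation="relu",
                    max_iter=500, random_state=0)
mlp.fit(X_train, y_train)
print("test accuracy:", mlp.score(X_test, y_test))
```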

Support vector machines are perhaps considered one of the most powerful techniques. Using the famous kernel trick, in theory one can always achieve perfect separation of the training data. Unlike ANNs, they solve a convex optimization problem with a unique solution (no local minima). They can, however, be computationally intensive and difficult to apply to large datasets. SVMs are definitely an active research area.
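A sketch of the kernel trick in action, again with scikit-learn and a synthetic dataset of my choosing: concentric circles cannot be separated by a straight line, so a linear SVM struggles while an RBF-kernel SVM separates them almost perfectly.

```python
# Sketch: linear vs. RBF-kernel SVM on data that is not linearly separable.
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Two concentric circles: no straight line can separate the classes.
X, y = make_circles(n_samples=400, noise=0.05, factor=0.5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for kernel in ("linear", "rbf"):
    clf = SVC(kernel=kernel, C=1.0)
    clf.fit(X_train, y_train)
    print(kernel, "test accuracy:", clf.score(X_test, y_test))
```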

Then there is a class of meta-learning algorithms, the ensemble learning techniques such as bagging, boosting, stacking, etc. They are not complete algorithms in themselves but rather ways of improving and combining other algorithms.
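A sketch of that "wrap another algorithm" idea: bagging and boosting built on top of decision trees, compared against a single tree by cross-validation. The dataset and estimator counts are illustrative choices on my part.

```python
# Sketch: ensembles (bagging, boosting) wrapping decision trees vs. a single tree.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import BaggingClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

models = [
    ("single tree", DecisionTreeClassifier(random_state=0)),
    # BaggingClassifier uses a decision tree as its default base learner.
    ("bagging", BaggingClassifier(n_estimators=50, random_state=0)),
    ("boosting", GradientBoostingClassifier(n_estimators=100, random_state=0)),
]

for name, model in models:
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy {score:.3f}")
```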

Finally, I should mention that no algorithm is better than another in general, and that the decision of which one to choose depends heavily on the domain, the data, and how it is preprocessed, among many other factors.


ROC curves have proven useful for evaluating machine learning techniques, and particularly for comparing different classification algorithms. You may find this introduction to ROC analysis helpful.
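A minimal sketch of ROC analysis with scikit-learn (my choice of classifiers and dataset, purely for illustration): compute each classifier's ROC curve from its predicted probabilities on the same binary problem and summarise it with the area under the curve (AUC).

```python
# Sketch: comparing two classifiers via ROC / AUC on one binary dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

classifiers = [
    ("logistic regression", make_pipeline(StandardScaler(), LogisticRegression())),
    ("decision tree", DecisionTreeClassifier(random_state=0)),
]

for name, clf in classifiers:
    clf.fit(X_train, y_train)
    scores = clf.predict_proba(X_test)[:, 1]   # probability of the positive class
    fpr, tpr, _ = roc_curve(y_test, scores)    # the points one would plot as a ROC curve
    print(f"{name}: AUC = {roc_auc_score(y_test, scores):.3f}")
```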


Judging from your question, you seem to be interested in classification algorithms. First, I would like to point out that data mining is not limited to classification. There are several other data mining tasks, such as mining frequent patterns, clustering, etc.

To answer your question, performance depends on the algorithm but also on the dataset. Some algorithms may give better accuracy on some datasets than on others. Besides the classical classification algorithms described in most data mining books (C4.5, etc.), there are many research papers published on these topics. If you want to know which algorithms generally perform better now, I would suggest reading the research papers. Research papers typically offer a performance comparison with previous algorithms. But as I said, performance may depend on your data, so you might have to try the algorithms to find out!
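A sketch of that "try them on your data" advice, using scikit-learn and a built-in dataset as stand-ins for your own data: cross-validate a few standard classifiers on the same dataset and compare their mean scores.

```python
# Sketch: benchmark several classifiers on one dataset with cross-validation.
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_wine(return_X_y=True)  # replace with your own dataset

candidates = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "naive Bayes": GaussianNB(),
    "SVM (RBF)": make_pipeline(StandardScaler(), SVC()),
    "random forest": RandomForestClassifier(random_state=0),
}

for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```

Which model comes out on top can change entirely when you swap in a different dataset, which is exactly the point made above.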

