
decision tree on information gain

If I've got two decision trees with the same number of nodes, which is considered better? Tree 1: (F is False and T is True)

[Image: tree 1 and tree 2 from the original question]

In other words, the first tree is wider but the second one is deeper.


I know this question is quite old, but in case you are still interested in the answer: generally, a shorter, wider tree would be "better." Consider the fact that it will take an additional decision to reach the inner decision node "C".

What you really have to look at is the entropy and gain at each inner decision node. Entropy is the amount of uncertainty or randomness associated with a particular variable. For example, consider a classifier with two classes, YES and NO (True or False in your case). If a particular variable or attribute, say x, has three training examples of class YES and three training examples of class NO (for a total of six), the entropy would be 1. This is because there is an equal number of both classes for this variable, and that is the most "mixed up" you can get. Likewise, if x had all six training examples of a particular class, say YES, then the entropy would be 0, because this particular variable would be pure, thus making it a leaf node in our decision tree.

Entropy may be calculated in the following way:

Entropy(S) = - p_yes * log2(p_yes) - p_no * log2(p_no)

where p_yes and p_no are the proportions of YES and NO examples in the set S. (In general, Entropy(S) = - sum over classes i of p_i * log2(p_i).)
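To make the YES/NO arithmetic above concrete, here is a minimal Python sketch (the function name and the per-class count representation are just illustrative choices, not anything from the original question):

```python
import math

def entropy(class_counts):
    """Entropy of a node, given a list of per-class example counts."""
    total = sum(class_counts)
    if total == 0:
        return 0.0
    result = 0.0
    for count in class_counts:
        if count == 0:
            continue  # treat 0 * log2(0) as 0
        p = count / total
        result -= p * math.log2(p)
    return result

print(entropy([3, 3]))  # 1.0 -- three YES, three NO: maximally mixed
print(entropy([6, 0]))  # 0.0 -- all six examples in one class: pure node
```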

Now consider gain. Note that at each level of the decision tree, we choose the attribute that presents the best gain for that node. The gain is simply the expected reduction in entropy achieved by learning the state of the random variable x. Gain is also known as Kullback-Leibler divergence. Gain can be calculated in the following way:

Gain(S, A) = Entropy(S) - sum over values v of attribute A of (|S_v| / |S|) * Entropy(S_v)

where S_v is the subset of S for which attribute A takes the value v.
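Continuing the sketch above (the helper name and the list-of-counts representation are again assumptions made for illustration, and the entropy function from the previous snippet is reused), the gain of a candidate split is the parent entropy minus the weighted entropies of the children:

```python
def information_gain(parent_counts, child_counts_list):
    """Expected reduction in entropy from splitting on an attribute.

    parent_counts: per-class counts at the parent node, e.g. [3, 3]
    child_counts_list: per-class counts for each child produced by the split,
                       e.g. [[3, 0], [0, 3]] for a perfect split
    """
    total = sum(parent_counts)
    weighted_child_entropy = sum(
        (sum(counts) / total) * entropy(counts) for counts in child_counts_list
    )
    return entropy(parent_counts) - weighted_child_entropy

# A perfect split removes all uncertainty: gain == 1.0
print(information_gain([3, 3], [[3, 0], [0, 3]]))
# A useless split leaves the classes just as mixed: gain == 0.0
print(information_gain([3, 3], [[2, 2], [1, 1]]))
```

The attribute chosen at each node is simply the one whose split yields the highest value of this quantity.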
