
Python milk library: object weights issue

I'm trying to use a one-vs-one composition of decision trees for multiclass classification. The problem is that when I pass different object weights to a classifier, the result stays the same.

Do I misunderstand something about how the weights work, or are they just handled incorrectly?

Thanks for your replies!

Here is my code:

from math import log, exp

class AdaLearner(object):
    def __init__(self, in_base_type, in_multi_type):
        self.base_type = in_base_type
        self.multi_type = in_multi_type

    def train(self, in_features, in_labels):
        model = AdaBoost(self.base_type, self.multi_type)
        model.learn(in_features, in_labels)

        return model

class AdaBoost(object):
    CLASSIFIERS_NUM = 100
    def __init__(self, in_base_type, in_multi_type):
        self.base_type = in_base_type
        self.multi_type = in_multi_type
        self.classifiers = []
        self.weights = []

    def learn(self, in_features, in_labels):
        labels_number = len(set(in_labels))
        self.weights = self.get_initial_weights(in_labels)

        for iteration in xrange(AdaBoost.CLASSIFIERS_NUM):
            classifier = self.multi_type(self.base_type())
            self.classifiers.append(classifier.train(in_features,
                                                     in_labels,
                                                     weights=self.weights))
            answers = []
            for obj in in_features:
                answers.append(self.classifiers[-1].apply(obj))
            err = self.compute_weighted_error(in_labels, answers)
            print err
            # a perfect classifier leaves nothing to reweight
            if abs(err - 0.) < 1e-6:
                break

            # standard AdaBoost confidence for this round's classifier
            alpha = 0.5 * log((1 - err) / err)

            self.update_weights(in_labels, answers, alpha)
            self.normalize_weights()

    def apply(self, in_features):
        # majority vote over the boosted classifiers (note: alpha is ignored here)
        answers = {}
        for classifier in self.classifiers:
            answer = classifier.apply(in_features)
            if answer in answers:
                answers[answer] += 1
            else:
                answers[answer] = 1
        ranked_answers = sorted(answers.iteritems(),
                                key=lambda (k,v): (v,k),
                                reverse=True)
        return ranked_answers[0][0]

    def compute_weighted_error(self, in_labels, in_answers):
        # weighted fraction of misclassified objects
        error = 0.
        w_sum = sum(self.weights)
        for ind in xrange(len(in_labels)):
            error += (in_answers[ind] != in_labels[ind]) * self.weights[ind] / w_sum
        return error

    def update_weights(self, in_labels, in_answers, in_alpha):
        # misclassified objects have their weights boosted by exp(alpha)
        for ind in xrange(len(in_labels)):
            self.weights[ind] *= exp(in_alpha * (in_answers[ind] != in_labels[ind]))

    def normalize_weights(self):
        w_sum = sum(self.weights)
        for ind in xrange(len(self.weights)):
            self.weights[ind] /= w_sum

    def get_initial_weights(self, in_labels):
        weight = 1 / float(len(in_labels))
        result = []
        for i in xrange(len(in_labels)):
            result.append(weight)
        return result

As you can see, it is just a simple AdaBoost (I instantiated it with in_base_type = tree_learner, in_multi_type = one_against_one), and it worked the same way no matter how many base classifiers were engaged; it just acted as a single multiclass decision tree. Then I made a hack: on each iteration I chose a random sample of objects with respect to their weights and trained the classifier on that subset without any weights. That worked as it was supposed to. A sketch of the hack follows.
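For reference, the resampling hack can be sketched roughly like this (resample_by_weight is a name made up for illustration; it assumes numpy is available):

import numpy as np

def resample_by_weight(in_features, in_labels, in_weights):
    # draw N indices with replacement, with probability proportional to weight
    probs = np.asarray(in_weights, dtype=float)
    probs /= probs.sum()
    indices = np.random.choice(len(in_labels), size=len(in_labels), p=probs)
    features = [in_features[i] for i in indices]
    labels = [in_labels[i] for i in indices]
    return features, labels

# inside learn(), instead of passing weights to the learner:
#   sampled_features, sampled_labels = resample_by_weight(
#       in_features, in_labels, self.weights)
#   self.classifiers.append(classifier.train(sampled_features, sampled_labels))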


The default tree criterion, namely information gain, does not take the weights into account. If you know of a formula which would do it, I'll implement it.
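For what it's worth, one common generalization replaces class counts with sums of object weights when computing entropy. This is only a sketch of that idea, not milk's API:

from math import log

def weighted_entropy(labels, weights):
    # entropy where each object contributes its weight instead of a count of 1
    totals = {}
    for label, weight in zip(labels, weights):
        totals[label] = totals.get(label, 0.) + weight
    total = sum(totals.values())
    entropy = 0.
    for class_weight in totals.values():
        p = class_weight / total
        if p > 0:
            entropy -= p * log(p, 2)
    return entropy

def weighted_information_gain(labels0, weights0, labels1, weights1):
    # gain of a split: parent entropy minus weight-proportional child entropies
    w0, w1 = sum(weights0), sum(weights1)
    parent = weighted_entropy(list(labels0) + list(labels1),
                              list(weights0) + list(weights1))
    children = (w0 * weighted_entropy(labels0, weights0) +
                w1 * weighted_entropy(labels1, weights1)) / (w0 + w1)
    return parent - children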

In the meantime, using neg_z1_loss will handle the weights correctly. By the way, there was a slight bug in that implementation, so you will need to use the most current github master.
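Assuming tree_learner exposes the split criterion as a constructor argument (which the mention of a "default tree criterion" suggests, but verify against the milk source), switching to neg_z1_loss would look roughly like this:

from milk.supervised import tree
from milk.supervised.multi import one_against_one

# assumption: tree_learner accepts the criterion as a constructor argument
# and neg_z1_loss lives in milk.supervised.tree; check the current master
base = lambda: tree.tree_learner(criterion=tree.neg_z1_loss)
learner = AdaLearner(base, one_against_one)
# model = learner.train(features, labels)   # features/labels: your dataset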
