
What does discriminative reranking do in NLP tasks?

Recently, I have read "Discriminative Reranking for Natural Language Processing" by Collins. I'm confused about what the reranking actually does. Does it add more global features to the reranking model, or something else?


If you mean this paper, then what is done is the following:

  1. train a parser using a generative model, i.e. one where you compute P(term | tree) and use Bayes' rule to reverse that and get P(tree | term),
  2. apply that to get an initial k-best ranking of trees from the model,
  3. train a second model on features of the desired trees,
  4. apply that second model to re-rank the output of step 2 (a minimal sketch of this step follows the list).
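For concreteness, here's a minimal sketch of the rerank step in Python. The helpers `kbest_parse` and `extract_features` are hypothetical stand-ins for the base parser and the feature extractor (they are not from the paper); the reranker itself is just a linear score over a candidate's features added to the generative log-probability, which is the usual form of this kind of model:

```python
def rerank(sentence, weights, kbest_parse, extract_features):
    """Pick the candidate tree with the highest combined score.

    Score = generative log P(tree) + weights . feature_vector(tree).
    `kbest_parse` and `extract_features` are hypothetical helpers:
    the first yields (tree, log_probability) pairs from the base
    parser, the second maps a tree to a dict of feature counts.
    """
    best_tree, best_score = None, float("-inf")
    for tree, logprob in kbest_parse(sentence, k=50):
        feats = extract_features(tree)  # e.g. {"rule:NP->DT NN": 1, ...}
        # Linear reranking score: base model + learned feature weights.
        score = logprob + sum(weights.get(f, 0.0) * v for f, v in feats.items())
        if score > best_score:
            best_tree, best_score = tree, score
    return best_tree
```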

The reason the second model is useful is that in generative models (such as naïve Bayes, HMMs, PCFGs), it can be hard to add features other than word identity: the model would have to predict the probability of the exact feature vector rather than of the separate features, and an exact vector that never occurred in the training data gets P(vector|tree) = 0 and therefore P(tree|vector) = 0 (smoothing helps, but the problem remains). This is the eternal NLP problem of data sparsity: you can't build a training corpus that contains every single utterance that you'll want to handle.
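Here's a toy illustration of that failure mode, with made-up features (nothing here is from Collins' paper). Both individual features occur in training, but their combination does not, so an exact-vector estimate collapses to zero:

```python
from collections import Counter

# Tiny made-up training set of (feature vector, label) pairs.
train = [
    (("suffix=-ing", "prev=DT"), "NOUN"),
    (("suffix=-ing", "prev=MD"), "VERB"),
]
# Count each exact feature vector, as a model conditioning on the
# whole vector effectively would.
counts = Counter(vec for vec, _ in train)

# Both features were seen in training, but this combination was not.
test_vec = ("suffix=-ing", "prev=PRP")
p = counts[test_vec] / len(train)
print(p)  # 0.0 -> P(vector | tree) = 0, hence P(tree | vector) = 0
```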

Discriminative models such as MaxEnt are much better at handling feature vectors, but take longer to fit and can be more complicated to handle (although CRFs and neural nets have been used to construct parsers as discriminative models). Collins et al. try to find a middle ground between the fully generative and fully discriminative approaches.
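As a contrast, here is a minimal sketch of the discriminative side, assuming scikit-learn is available and reusing the same made-up features as above; logistic regression is the standard MaxEnt implementation. This illustrates the general idea, not the specific setup Collins et al. used. Because each feature gets its own weight, the unseen combination is no longer fatal:

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# Same toy data as above, now as overlapping binary features.
X_raw = [{"suffix=-ing": 1, "prev=DT": 1},
         {"suffix=-ing": 1, "prev=MD": 1}]
y = ["NOUN", "VERB"]

vec = DictVectorizer()
X = vec.fit_transform(X_raw)
clf = LogisticRegression().fit(X, y)

# The combination (and one of its features) is unseen in training;
# DictVectorizer silently drops the unknown feature, and the model
# still yields nonzero probabilities from the features it does know.
x_test = vec.transform([{"suffix=-ing": 1, "prev=PRP": 1}])
print(clf.predict_proba(x_test))
```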
