Questions regarding Hidden Markov Models and Conditional Random Fields
I have been looking at Hidden Markov Models and Conditional Random Fields for the task of Named Entity Recognition, and I seem to be stuck on a fundamental concept, namely: is the goal of the learning process to calculate the argmax from the training data and then apply that argmax sequence to all instances of the test data?
Consider this Hidden Markov Model example: I have two states {1, 0}, where 1 is an entity and 0 is any other word. For simplicity's sake, I'm not concerning myself with entity categorization just yet, only entity detection.
My training data is as follows:
Obama/1 lives/0 in/0 Washington/1
The/0 iPad/1 is/0 great/0
Steve/1 Jobs/1 is/0 sick/0
Now, following the argmax approach, I compute the transition probabilities:
P(State 1 to State 1) = 1/9
P(State 1 to State 0) = 1 - 1/9
P(State 0 to State 0) = 3/9
P(State 0 to State 1) = 1 - 3/9
And after working out V and U matrices, I find that:
The best label sequence extracted from the training data = 1 1 0 0
Now consider the test sentence:
The iPhone is great
Do I just apply the sequence 1 1 0 0 to the test sentence? In this case it would actually work, but for another test sentence like "A spokesperson for Sony was fired", the sequence 1 1 0 0 would clearly be useless.
To summarize: is the purpose of training to extract ONE best label sequence and apply that to all test sentences? It would seem unlikely! What am I missing??
I strongly recommend that you read this lecture on HMMs. Here's an excerpt from the HMM definition:
A parameter q(s|u,v) for any trigram (u,v,s) such that s ∈ K ∪ {STOP}, and u,v ∈ K ∪ {*}. The value for q(s|u,v) can be interpreted as the probability of seeing the tag s immediately after the bigram of tags (u,v).
A parameter e(x|s) for any x ∈ V, s ∈ K. The value for e(x|s) can be interpreted as the probability of seeing observation x paired with state s.
You seem to be missing the e parameters entirely, and you are not calculating the q parameters correctly. For example:
q(1|0,0) = count <0,0,1> / count <0,0>
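As a minimal sketch of what that counting looks like (using the three toy training sentences from the question; the padding with * and STOP and the maximum-likelihood estimates follow the lecture's definitions, everything else here is just illustrative naming):

```python
from collections import defaultdict

# Toy training data from the question: (word, tag) pairs per sentence.
train = [
    [("Obama", "1"), ("lives", "0"), ("in", "0"), ("Washington", "1")],
    [("The", "0"), ("iPad", "1"), ("is", "0"), ("great", "0")],
    [("Steve", "1"), ("Jobs", "1"), ("is", "0"), ("sick", "0")],
]

trigram_counts = defaultdict(int)   # count <u, v, s>
bigram_counts = defaultdict(int)    # count <u, v>
emission_counts = defaultdict(int)  # count of tag s emitting word x
tag_counts = defaultdict(int)       # count of tag s

for sentence in train:
    # Pad each tag sequence with two leading "*" and a trailing "STOP".
    padded = ["*", "*"] + [tag for _, tag in sentence] + ["STOP"]
    for u, v, s in zip(padded, padded[1:], padded[2:]):
        trigram_counts[(u, v, s)] += 1
        bigram_counts[(u, v)] += 1
    for word, tag in sentence:
        emission_counts[(word, tag)] += 1
        tag_counts[tag] += 1

def q(s, u, v):
    """q(s|u,v) = count<u,v,s> / count<u,v> (maximum-likelihood estimate)."""
    return trigram_counts[(u, v, s)] / bigram_counts[(u, v)] if bigram_counts[(u, v)] else 0.0

def e(x, s):
    """e(x|s) = count of tag s emitting word x / count of tag s."""
    return emission_counts[(x, s)] / tag_counts[s] if tag_counts[s] else 0.0

print(q("1", "0", "0"))   # count<0,0,1> / count<0,0>
print(e("is", "0"))       # probability of seeing "is" paired with tag 0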
The best sequence of tags is the most probable one under the product of the above parameters.
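For reference, the product being maximized (per the trigram HMM definition in the lecture, with y_0 = y_{-1} = * and y_{n+1} = STOP) is:

p(x_1 … x_n, y_1 … y_{n+1}) = ∏_{i=1..n+1} q(y_i | y_{i-2}, y_{i-1}) × ∏_{i=1..n} e(x_i | y_i)

and the predicted tag sequence for a new sentence x_1 … x_n is the y_1 … y_{n+1} that maximizes this product. Note that the sentence itself enters through the e(x_i | y_i) terms, which is why different test sentences get different best sequences.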
For your example "A spokesperson for Sony was fired" all the sequences are:
* * 0 0 0 0 0 0 STOP
* * 0 0 0 0 0 1 STOP
...
* * 1 1 1 1 1 1 STOP
and you should calculate e(A|0), e(spokesperson|0), q(0|*,*), q(0|*,0), etc. Then multiply them accordingly and take the sequence with the highest probability.
Since this is a time-consuming task that grows exponentially with sequence length, the Viterbi algorithm is used instead (also described in the lecture); a rough sketch follows below.
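A minimal sketch of the Viterbi recursion for this trigram HMM, reusing the q and e functions from the counting sketch above (smoothing and log-probabilities are omitted for brevity):

```python
def viterbi(words, tags=("0", "1")):
    """Most probable tag sequence under the trigram HMM (uses q and e from above).

    Note: an unseen word makes every e(x|s) zero, so real use needs smoothing or
    pseudo-words for rare/unknown tokens, as described in the lecture.
    """
    n = len(words)

    def K(k):
        # Allowed tags at position k; positions 0 and -1 carry the padding tag "*".
        return ("*",) if k <= 0 else tags

    pi = {(0, "*", "*"): 1.0}  # pi[(k, u, v)]: best score of a k-word prefix ending in tags (u, v)
    bp = {}                    # backpointer: best tag at position k-2 for that (k, u, v)

    for k in range(1, n + 1):
        for u in K(k - 1):
            for v in K(k):
                best_w, best_p = None, -1.0
                for w in K(k - 2):
                    p = pi.get((k - 1, w, u), 0.0) * q(v, w, u) * e(words[k - 1], v)
                    if p > best_p:
                        best_w, best_p = w, p
                pi[(k, u, v)] = best_p
                bp[(k, u, v)] = best_w

    # Close the sequence with the STOP transition and pick the best final tag pair.
    best_pair, best_p = None, -1.0
    for u in K(n - 1):
        for v in K(n):
            p = pi[(n, u, v)] * q("STOP", u, v)
            if p > best_p:
                best_pair, best_p = (u, v), p

    # Walk the backpointers to recover the full tag sequence.
    y = [None] * (n + 1)
    y[n - 1], y[n] = best_pair
    for k in range(n - 2, 0, -1):
        y[k] = bp[(k + 2, y[k + 1], y[k + 2])]
    return y[1:]

print(viterbi(["The", "iPad", "is", "great"]))  # with the toy counts above this yields ['0', '1', '0', '0']
```

The dynamic program only keeps, for each position, the best score for every pair of preceding tags, so the cost is linear in sentence length rather than exponential in it.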