find some sentences

I'd like to find a good way to extract some (let's say two) sentences from a text. What would be better: a regexp or the split method? Your ideas?

As requested by Jeremy Stein, here are some examples.

Examples:

Input:

The first thing to do is to create the Comment model. We’ll create this in the normal way, but with one small difference. If we were just creating comments for an Article we’d have an integer field called article_id in the model to store the foreign key, but in this case we’re going to need something more abstract.

First two sentences:

The first thing to do is to create the Comment model. We’ll create this in the normal way, but with one small difference.

Input:

Mr. T is one mean dude. I'd hate to get in a fight with him.

First two sentences:

Mr. T is one mean dude. I'd hate to get in a fight with him.

Input:

The D.C. Sniper was executed by lethal injection at a Virginia prison. Death was pronounced at 9:11 p.m. ET.

First two sentences:

The D.C. Sniper was executed by lethal injection at a Virginia prison. Death was pronounced at 9:11 p.m. ET.

Input:

In her concluding remarks, the opposing attorney said that "...in this and so many other instances, two wrongs won’t make a right." The jury seemed to agree.

First two sentences:

In her concluding remarks, the opposing attorney said that "...in this and so many other instances, two wrongs won’t make a right." The jury seemed to agree.

Guys, as you can see, it's not so easy to pick out two sentences from a text. :(


As you've noticed, sentence tokenizing is a bit trickier than it might first seem, so you may as well take advantage of existing solutions. The Punkt sentence tokenizing algorithm is popular in NLP, and there is a good implementation in the Python Natural Language Toolkit; they describe its use here. They also describe another approach here.

There are probably other implementations around, or you could read the original paper describing the Punkt algorithm: Kiss, Tibor and Strunk, Jan (2006): Unsupervised Multilingual Sentence Boundary Detection. Computational Linguistics 32: 485–525.

You can also read another Stack Overflow question about sentence tokenizing here.


 your_string = "First sentence. Second sentence. Third sentence"
 sentences = your_string.split(".")
 => ["First sentence", " Second sentence", " Third sentence"]

No need to make simple code complicated.

Edit: Now that you've clarified that the real input is more complex than your initial example, you should disregard this answer, as it doesn't consider edge cases. An initial look at NLP should show you what you're getting into, though.

Some of the edge cases that I've seen in the past to be a bit complicated are:

  • Dates: Some regions use dd.mm.yyyy
  • Quotes: While he was sighing — "Whatever, do it. Now. And by the way...". This was enough.
  • Units: He was going at 138 km. while driving on the freeway.

If you plan to parse these texts you should stay away from splits or regular expressions.
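To see why, here is a quick Ruby sketch (illustrative only) of the "units" edge case above tripping up a naive split:

```ruby
# A naive split on "." breaks on abbreviations and units.
text = "He was going at 138 km. while driving on the freeway. Then he stopped."

naive = text.split(".").map(&:strip)
# The unit "km." produces a bogus sentence boundary:
# ["He was going at 138 km", "while driving on the freeway", "Then he stopped"]
```

The first "sentence" ends mid-thought at the abbreviation, which is exactly the kind of error a plain split or regex can't avoid.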


This will usually match sentences.

/\S(?:(?![.?!]+\s).)*[.?!]+(?=\s|$)/m

For your example of two sentences, take the first two matches.
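In Ruby that could look like the following sketch, using String#scan with the regex above (the sample text is made up):

```ruby
# Scan for sentence-like matches and keep the first two.
SENTENCE = /\S(?:(?![.?!]+\s).)*[.?!]+(?=\s|$)/m

text = "The first thing is to create the model. We'll do it the normal way. But carefully."
first_two = text.scan(SENTENCE).first(2)
# => ["The first thing is to create the model.", "We'll do it the normal way."]
```

Since the regex has no capture groups, scan returns the full matches directly. It will still stumble on abbreviations like "Mr.", as the question's examples show.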


irb(main):005:0> a = "The first sentence. The second sentence. And the third"
irb(main):006:0> a.split(".")[0...2]
=> ["The first sentence", " The second sentence"]
irb(main):007:0>

EDIT: here's how you handle the "This is a sentence ...... and another . And yet another ..." case:

irb(main):001:0> a = "This is the first sentence ....... And the second. Let's not forget the third"
=> "This is the first sentence ....... And the second. Let's not forget the third"
irb(main):002:0> a.split(/\.+/)
=> ["This is the first sentence ", " And the second", " Let's not forget the third"]

And you can apply the same range operator, [0...2], to extract the first two.
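Putting the pieces together (with a strip to drop the stray whitespace the split leaves behind):

```ruby
# Split on runs of dots, take the first two pieces, trim whitespace.
a = "This is the first sentence ....... And the second. Let's not forget the third"
first_two = a.split(/\.+/)[0...2].map(&:strip)
# => ["This is the first sentence", "And the second"]
```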


You will find tips and software links on the sentence boundary detection Wikipedia page.


If you already know which sentences to search for, a regex should do well searching for

((YOUR SENTENCE HERE)|(YOUR OTHER SENTENCE))

Split would probably use up quite a lot of memory, as it also keeps the parts you don't need (all the text that isn't your sentence), whereas a regex only keeps the sentence you searched for (if it finds it, of course).


If you're segmenting a piece of text into sentences, then what you want to do is begin by determining which punctuation marks can separate sentences. In general, this is !, ? and . (but if all you care about is a . for the texts you're processing, then just go with that).

Now since these can appear inside quotations, or as parts of abbreviations, what you want to do is find each occurrence of these punctuation marks and run some sort of machine learning classifier to determine whether that occurrence starts a new sentence, or whether it does something else. This involves training data and a properly-constructed classifier. And it won't be 100% accurate, because there's probably no way to be 100% accurate.
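As a very crude stand-in for such a classifier, here is a rule-based Ruby sketch: a predicate that rejects a candidate boundary when the preceding token is a known abbreviation. The abbreviation list is a made-up sample, nowhere near exhaustive, and a real classifier would learn this from data:

```ruby
# Illustrative sample only; a real system would learn abbreviations from data.
ABBREVIATIONS = %w[Mr. Mrs. Dr. D.C. p.m. a.m. etc. e.g. i.e.]

# Decide whether the punctuation mark at `index` ends a sentence.
def sentence_boundary?(text, index)
  return false unless ".!?".include?(text[index])
  # Not a boundary if the token ending here is a known abbreviation.
  token = text[0..index][/\S+\z/]
  return false if ABBREVIATIONS.include?(token)
  # Require end of text, or whitespace followed by a capital or quote.
  rest = text[(index + 1)..-1]
  rest.empty? || !!(rest =~ /\A\s+["'A-Z]/)
end
```

On the "Mr. T" example from the question, this correctly skips the period in "Mr." and accepts the one after "dude." — but it is only a heuristic, and the real edge cases (quotes, ellipses, initials) are why the trained-classifier route exists.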

I suggest looking in the literature for sentence segmentation techniques, and have a look at the various natural language processing toolkits that are out there. I haven't really found one for Ruby yet, but I happen to like OpenNLP (which is in Java).

