lexers / parsers for (un)structured text documents [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.

We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.

Closed 5 years ago.


There are lots of parsers and lexers for scripts (i.e. structured computer languages). But I'm looking for one which can break an (almost) unstructured text document into larger sections, e.g. chapters, paragraphs, etc.

It's relatively easy for a person to identify these sections: where the table of contents, the acknowledgements, or the main body starts. It is also possible to build rule-based systems to identify some of them (such as paragraphs).

I don't expect it to be perfect, but does anyone know of such a broad 'block-based' lexer / parser? Or could you point me in the direction of literature which may help?


Many lightweight markup languages like Markdown (which, incidentally, Stack Overflow uses), reStructuredText, and (arguably) POD are similar to what you're talking about. They have minimal syntax and break input down into parseable syntactic pieces. You might be able to get some information by reading about their implementations.
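The core move those implementations make is splitting input into blocks at blank lines, then classifying each block by surface cues. A minimal sketch in Python (the heading heuristic here is an illustrative assumption, not Markdown's actual rule):

```python
import re

def split_blocks(text):
    """Split raw text into blocks at blank lines, the way
    lightweight-markup parsers (e.g. Markdown) do, then classify
    each block with a simple surface cue."""
    blocks = re.split(r"\n\s*\n", text.strip())
    result = []
    for block in blocks:
        block = block.strip()
        first = block.splitlines()[0]
        # Illustrative heuristic: a short ALL-CAPS first line marks a
        # heading (Markdown itself uses '#' or underlines instead).
        if first.isupper() and len(first) < 60:
            kind = "heading"
        else:
            kind = "paragraph"
        result.append((kind, block))
    return result

sample = "CHAPTER ONE\n\nIt was a dark and stormy night.\nThe rain fell in torrents."
for kind, block in split_blocks(sample):
    print(kind, "->", block.splitlines()[0])
```

A real implementation layers inline parsing on top of this block pass, but the two-stage block/inline split is the part that carries over to prose documents.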


  1. Define the annotation standard, which indicates how you would like to break things up.
  2. Go on to Amazon Mechanical Turk and ask people to label 10K documents using your annotation standard.
  3. Train a CRF (conditional random field, which is like an HMM but better suited to rich, overlapping features) on this training data.

If you actually want to go this route, I can elaborate on the details. But this will be a lot of work.
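To make step 3 concrete: CRF toolkits take the document as a sequence of items (here, lines) with a feature dictionary per item, and learn to assign each one a label such as TOC, HEADING, or BODY from the annotated data. A sketch of the feature-extraction side, with hypothetical features chosen for illustration:

```python
def line_features(lines, i):
    """Per-line feature dict -- the kind of input a CRF toolkit
    (e.g. CRFsuite) consumes. Feature names here are illustrative."""
    stripped = lines[i].strip()
    return {
        "blank": stripped == "",
        "all_caps": bool(stripped) and stripped.isupper(),
        "starts_with_digit": stripped[:1].isdigit(),
        "short": len(stripped) < 40,
        "prev_blank": i > 0 and lines[i - 1].strip() == "",
        "mentions_contents": "contents" in stripped.lower(),
    }

doc = [
    "TABLE OF CONTENTS",
    "",
    "1. Introduction .... 3",
    "",
    "The story begins here, in ordinary paragraph prose.",
]
features = [line_features(doc, i) for i in range(len(doc))]
print(features[0]["all_caps"], features[2]["starts_with_digit"])
```

The CRF's advantage over per-line rules is that it scores whole label sequences, so evidence like "a TOC entry rarely follows body text" gets learned from the annotations rather than hand-coded.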


Most lex/yacc-style tools work with a well-defined grammar. If you can define your grammar in a BNF-like format (most parser generators accept a similar syntax), then you can use any of them. That may be stating the obvious. However, you can still be a little fuzzy around the 'blocks' (tokens) of text that form your grammar; after all, you define the rules for your tokens.

I have used the Parse::RecDescent Perl module in the past, with varying levels of success, for similar projects.

Sorry, this may not be a complete answer; it's more a matter of sharing my experience on similar projects.
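The "fuzzy tokens" idea above can be sketched without a full parser generator: write an ordered list of token rules, where each rule is a pattern guess for one kind of block line. A minimal Python version (the patterns are illustrative assumptions, not a definitive grammar):

```python
import re

# Ordered token rules: the first pattern that matches a line wins.
# These patterns are guesses at document structure, not a fixed grammar.
TOKEN_RULES = [
    ("CHAPTER", re.compile(r"(chapter|part)\b", re.I)),
    ("TOC_ENTRY", re.compile(r".+\.{3,}\s*\d+\s*$")),
    ("BLANK", re.compile(r"\s*$")),
]

def tokenize(text):
    """Label each line with the first matching rule, else PARA."""
    tokens = []
    for line in text.splitlines():
        for name, pattern in TOKEN_RULES:
            if pattern.match(line):
                tokens.append((name, line))
                break
        else:
            tokens.append(("PARA", line))
    return tokens

for name, line in tokenize("Chapter 1\n\nIntroduction ...... 5\nSome body text."):
    print(name, repr(line))
```

A grammar layer would then group these token streams into sections (e.g. CHAPTER followed by PARA lines), which is where a tool like yacc or Parse::RecDescent comes in.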


Try Pygments, GeSHi, or prettify.

They can handle just about anything you throw at them and are very forgiving of errors in your grammar as well as in your documents.

References:
Gitorious uses prettify
GitHub uses Pygments
Rosetta Code uses GeSHi

