Extracting paragraph breaks from OCR text?
I'm trying to recreate the paragraphs and indentations from the output of OCR'd image text, like so:
Input (imagine that this is an image, not typed):
Output (with a few mistakes):
As you can see, no paragraph breaks or indentations are preserved.
Using Python, I tried an approach like this, but it doesn't work (fails too often):
Code:
def smart_format(text):
    # Split the OCR output into lines and compute the average line length.
    textList = text.split('\n')
    temp = ''
    averageLL = sum(len(line) for line in textList) / len(textList)
    for line in textList:
        stripped = line.strip()
        # A line that ends a sentence and is noticeably shorter than
        # average is assumed to end a paragraph.
        if stripped.endswith(('!', '.', '?')) and not stripped.endswith('-'):
            if averageLL - len(line) > 7:
                temp += '{{ paragraph }}' + line + '\n'
            else:
                temp += line + '\n'
        else:
            temp += line + '\n'
    # Re-join hyphenated words, flatten the remaining line breaks, and turn
    # each paragraph marker into a blank line plus an indent.
    return (temp.replace(' -\n', '').replace('-\n', '')
                .replace(' \n', '').replace('\n', ' ')
                .replace('{{ paragraph }}', '\n\n '))
Does anyone have any suggestions as to how I could recreate this layout? I'm working with old books, so I was hoping to re-typeset them with LaTeX; it's quite simple to write a Python script to do that.
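For the LaTeX step itself, here is a minimal sketch of what I have in mind, assuming the paragraphs have already been recovered as a list of strings; the book document class and the escaping table are just placeholders:
    LATEX_SPECIALS = {'&': r'\&', '%': r'\%', '$': r'\$', '#': r'\#',
                      '_': r'\_', '{': r'\{', '}': r'\}'}

    def to_latex(paragraphs):
        # Escape LaTeX's special characters, then separate paragraphs with a
        # blank line, which is how LaTeX marks a paragraph break.
        def escape(s):
            return ''.join(LATEX_SPECIALS.get(ch, ch) for ch in s)
        body = '\n\n'.join(escape(p) for p in paragraphs)
        return ('\\documentclass{book}\n'
                '\\begin{document}\n'
                + body +
                '\n\\end{document}\n')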
Thanks!
You can break up the image into multiple paragraphs by looking at the entropy of each 5-10 pixel horizontal slice. Although this is usually used to create "interesting" thumbnails from larger images or videos, you can also use it to identify the presence or absence of text. Here's how.
You divide the image into a bunch of horizontal strips, each 5-10 pixels tall. If a strip is not "busy" then you can assume that there is no text there. You can use this to isolate paragraphs. Now, you take each paragraph individually, and feed it into your OCR.
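Here is a minimal sketch of that idea, assuming Pillow and NumPy are available; the file name, strip height, and entropy threshold are placeholders you would tune for your scans:
    import numpy as np
    from PIL import Image

    def find_text_blocks(path, strip_height=8, threshold=0.5):
        gray = np.asarray(Image.open(path).convert('L'))
        blocks, start = [], None
        for top in range(0, gray.shape[0], strip_height):
            strip = gray[top:top + strip_height]
            # Shannon entropy of the strip's pixel histogram: low entropy
            # means a nearly uniform (blank) band between paragraphs.
            counts = np.bincount(strip.ravel(), minlength=256)
            probs = counts[counts > 0] / strip.size
            entropy = -np.sum(probs * np.log2(probs))
            busy = entropy > threshold
            if busy and start is None:
                start = top                      # a text block begins here
            elif not busy and start is not None:
                blocks.append((start, top))      # the block ended at this gap
                start = None
        if start is not None:
            blocks.append((start, gray.shape[0]))
        return blocks                            # list of (top_row, bottom_row)

    # Each block can then be cropped out, e.g. crop((0, top, page_width, bottom)),
    # and fed to the OCR engine on its own.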
Instead of purely looking for short lines, you could try to tell whether the first word on a line could easily have fit on the previous line, which would indicate an intentional newline. Apart from that (and paying close attention to punctuation, as you're doing in your example), I'd think the only other option is going back to the original images.
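A rough sketch of that heuristic, assuming the longest OCR line is a usable stand-in for the page's column width (in practice you might measure that more carefully):
    def mark_paragraphs(text):
        lines = [line.rstrip() for line in text.split('\n')]
        max_width = max(len(line) for line in lines)
        out = []
        for i, line in enumerate(lines[:-1]):
            out.append(line)
            next_words = lines[i + 1].split()
            first_word = next_words[0] if next_words else ''
            # The word would have fit on this line, yet the printer broke the
            # line anyway: treat that as a deliberate paragraph break.
            if line and first_word and len(line) + 1 + len(first_word) <= max_width:
                out.append('')        # blank line marks the paragraph break
        out.append(lines[-1])
        return '\n'.join(out)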