
Faster alternative to BeautifulSoup for parsing visible text in HTML files

I have a directory with 50,000 HTML files in it. For each file I want to extract the 'visible' text (the text a person would actually see when viewing the file in a browser). I've seen some fine solutions using libraries such as BeautifulSoup, but I want something faster.

A regex-based solution I wrote wasn't much faster.

Would I speed things up by using some kind of file stream reader in Python? What other faster alternatives are there?

(I'm happy to lose some accuracy by not using trusted parsers like BeautifulSoup if the solution is faster).

EDIT:

Fast enough = 5 minutes. Faster = anything under 1.3 hours, which is roughly what BeautifulSoup would take at an average of a tenth of a second per file (and even that estimate seems optimistic based on previous work I've used it for).
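
For concreteness, here is the kind of accuracy trade-off I mean: a minimal lxml-based sketch (assuming lxml is installed) that treats everything outside script/style/noscript elements as visible. That is cruder than a real renderer (it ignores CSS display:none, hidden attributes, and so on), but lxml's C-backed parser is typically much faster than BeautifulSoup's default pure-Python one.

    from pathlib import Path
    import lxml.html

    def visible_text(path):
        # lxml's C parser does the heavy lifting here.
        root = lxml.html.parse(str(path)).getroot()
        # Drop elements a browser never displays. Crude: this ignores
        # CSS display:none, the hidden attribute, etc.
        for el in list(root.iter("script", "style", "noscript")):
            el.drop_tree()
        return root.text_content()

    for path in Path(".").glob("*.html"):
        text = visible_text(path)  # process `text` here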


It sounds like you are just trying to render every HTML file in a directory. Why write your own renderer (in Python or in any other language) when there are plenty of others out there?

Here is an example using w3m (you could equally use Lynx, links, ...):

find . -name '*.html' -exec w3m -dump {} \; 
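
The find command runs one w3m process at a time; with 50,000 files, you could fan the same w3m -dump call out over a process pool from Python to use every core. A rough sketch, assuming w3m is on your PATH and the files sit under the current directory:

    import subprocess
    from multiprocessing import Pool
    from pathlib import Path

    def dump(path):
        # Same job as the find/-exec one-liner: let w3m render the
        # page and emit its visible text.
        result = subprocess.run(["w3m", "-dump", str(path)],
                                capture_output=True, text=True)
        return path, result.stdout

    if __name__ == "__main__":
        files = list(Path(".").rglob("*.html"))
        with Pool() as pool:  # one worker per CPU core by default
            for path, text in pool.imap_unordered(dump, files):
                pass  # process `text` here

imap_unordered yields each result as soon as its worker finishes, so one slow page doesn't hold up the rest.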
