
How to read a portion of a file (parsing), stopping at a "<" char, in Python?

I need to parse an HTML file in Python and store the content in a list. Example: ['<html>', '<head>', '<meta name="robots" content="noindex">']

Here is what I have so far for the buggy function:

def getTexte(fp, compte=0):  # returns the text before an HTML tag
    txt = ""
    pos = fp.tell()       # store the current position for later use
    tmppos = fp.tell()    # same here
    for car in fp.read():
        if car == "<":        # if we encounter the start of an HTML tag
            fp.seek(tmppos)   # go back to just before the tag
            break             # and leave this damn for loop
        txt = txt + car       # concatenate each character onto the string
        tmppos = fp.tell()    # and store the position for later use
    if compte == 0:
        fp.seek(pos)
    if txt != "":
        return txt

Here is a sample of the output I get:

['<p>', 'Blablabla', 'lablabla', 'ablabla', 'blabla', 'labla', 'abla', 'bla', 'la', 'a', '</p>']

And I can't understand why. Maybe I'm too tired.
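For what it's worth, the shrinking output comes from `fp.read()` consuming the whole file in one call: once the loop starts, `fp.tell()` always reports end-of-file, so each `seek(tmppos)` lands one character further in than intended. A minimal sketch of the character-at-a-time alternative, where `tell()` stays in step with the loop (the helper name `get_text` is mine, and the file is assumed to be open in text mode):

```python
import io

def get_text(fp):
    """Return the text before the next '<', leaving fp positioned on it."""
    txt = ""
    while True:
        pos = fp.tell()      # position before reading this character
        car = fp.read(1)
        if car == "":        # end of file
            break
        if car == "<":
            fp.seek(pos)     # rewind so the caller still sees the '<'
            break
        txt += car
    return txt

fp = io.StringIO("<p>Blablabla</p>")
fp.read(3)                   # skip past '<p>'
print(get_text(fp))          # Blablabla
print(fp.read(1))            # <
```

Reading with `fp.read(1)` is slow on large files, which is one more reason to prefer a real parser, as the answers below suggest.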


As others have said in the comments, you really don't want to write an HTML parser by iterating over the input as a series of characters. Your code's use of the tell() and read() methods suggests that you're thinking of this as walking through an open file rather than working at a higher level (a document read into a buffer as a string).

There are a number of tools already written, freely available, widely tested, well maintained and broadly acclaimed which are designed specifically to perform this sort of task for you. The most popular of these, by far, is one called "BeautifulSoup", which is famed for its robustness and its tolerance of the sort of HTML that's found "in the real world." The goal of BeautifulSoup is, roughly speaking, to parse any HTML that your browser would reasonably display. Thus it can handle a wide variety of extremely common errors in HTML: improperly nested tags, containers with missing closing tags, non-standard "tags", tags with nonstandard or ill-formed attributes and attribute=value pairs, and so on.

Here's an extremely simple example of some Python code using BeautifulSoup:

#!/usr/bin/env python

import urllib2
from BeautifulSoup import BeautifulSoup 

def get_page(url):
    fetcher = urllib2.urlopen(url)
    results = fetcher.read()
    fetcher.close()
    return results

def find_tags(data):
    results = list()
    parser = BeautifulSoup(data)
    results.extend(parser.findAll())
    return results

if __name__ == '__main__':
    import sys, time

    for url in sys.argv[1:]:
        html=get_page(url)
        for n, each in enumerate([str(x) for x in find_tags(html)]):
            print n, each, '\n\n\n'

... as you can see, the references to BeautifulSoup account for only a few of the lines here. The rest fetch the HTML and print the results.

These results, incidentally, aren't quite what you're looking for, in that they represent a depth-wise traversal of each HTML container from the outermost <html> down through the <head> and its components, and thence through the <body> and its components, etc. In your code you'll probably want to traverse this tree, determine when you're at a leaf, and capture text/contents or tags/code in some way as you do so. You'll want to read the BeautifulSoup documentation for details that match your needs more precisely.


If all you need to do is use the output of parsed HTML, then take a look at Beautiful Soup. A tonne of work has gone into ensuring that HTML (and XML) is parsed correctly, even if you feed it invalid markup.

Are you required to build a parser? Or are you just required to use the output of a parser? This will determine the kind of help you get from StackOverflow. A lot of the time, by making your intentions (requirements) known alongside your proposed solution and problem, people will point out alternate solutions that may be better suited to your requirements. Food for thought.

