
Python lxml/beautiful soup to find all links on a web page

I am writing a script to read a web page and build a database of links that match certain criteria. Right now I am stuck with lxml and understanding how to grab all of the <a href> values from the HTML:

result = self._openurl(self.mainurl)
content = result.read()
html = lxml.html.fromstring(content)
print lxml.html.find_rel_links(html, 'href')


Use XPath. Something like (can't test from here):

urls = html.xpath('//a/@href')
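To make the idea concrete, here is a minimal, self-contained sketch of that XPath approach, using a small hypothetical HTML snippet in place of a fetched page:

```python
import lxml.html

# Hypothetical markup standing in for the downloaded page content.
html_text = '<html><body><a href="/a">A</a><a href="/b">B</a></body></html>'
doc = lxml.html.fromstring(html_text)

# //a/@href selects the href attribute of every <a> element in the tree.
urls = doc.xpath('//a/@href')
print(urls)  # ['/a', '/b']
```

The `@href` step means the query returns the attribute strings directly, so there is no need to loop over elements and call `get('href')` yourself.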


With iterlinks, lxml provides an excellent function for this task.

This yields (element, attribute, link, pos) for every link [...] in an action, archive, background, cite, classid, codebase, data, href, longdesc, profile, src, usemap, dynsrc, or lowsrc attribute.
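A small sketch of what that looks like in practice, on a hypothetical fragment containing links in two different attributes:

```python
import lxml.html

# Hypothetical markup with link-carrying attributes on different tags.
html_text = (
    '<html><body>'
    '<a href="/page">link</a>'
    '<img src="/logo.png">'
    '</body></html>'
)
doc = lxml.html.fromstring(html_text)

# iterlinks() walks the tree and yields (element, attribute, link, pos)
# for every attribute it recognizes as containing a link.
for element, attribute, link, pos in doc.iterlinks():
    print(element.tag, attribute, link)
```

Unlike a plain `//a/@href` query, this also picks up the `<img src>` link, which is useful if you want every outbound reference on the page rather than just anchors.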


I want to provide an alternative lxml-based solution.

This one uses the CSSSelector class provided by lxml.cssselect:

    import urllib.request
    import lxml.html
    from lxml.cssselect import CSSSelector

    connection = urllib.request.urlopen('http://www.yourTargetURL/')
    dom = lxml.html.fromstring(connection.read())
    selAnchor = CSSSelector('a')
    foundElements = selAnchor(dom)
    print([e.get('href') for e in foundElements])


You can use this method:

from urllib.parse import urljoin, urlparse
from lxml import html as lh
class Crawler:
    def __init__(self, start_url):
        self.start_url = start_url
        self.base_url = f'{urlparse(self.start_url).scheme}://{urlparse(self.start_url).netloc}'
        self.visited_urls = set()

    def fetch_urls(self, html):
        urls = []
        dom = lh.fromstring(html)
        for href in dom.xpath('//a/@href'):
            url = urljoin(self.base_url, href)
            if url not in self.visited_urls and url.startswith(self.base_url):
                urls.append(url)
        return urls
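The core of `fetch_urls` is resolving each href against the site's base URL and keeping only same-site links. A self-contained sketch of that logic, with hypothetical URLs and page content:

```python
from urllib.parse import urljoin, urlparse
from lxml import html as lh

# Hypothetical base URL and page: '/about' is a relative internal link,
# the second anchor points at an external site.
base_url = 'https://example.com'
page = '<a href="/about">in</a><a href="https://other.site/x">out</a>'

urls = []
for href in lh.fromstring(page).xpath('//a/@href'):
    # urljoin resolves relative hrefs against the base; absolute
    # hrefs pass through unchanged.
    url = urljoin(base_url, href)
    if url.startswith(base_url):  # keep only same-site links
        urls.append(url)
print(urls)  # ['https://example.com/about']
```

The `startswith(self.base_url)` check is what keeps the crawler on one domain; the external link resolves to `https://other.site/x` and is filtered out.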
