
Suggestions for avoiding duplicate products from scraping

I have written a very basic crawler that scrapes product information from websites and puts it into a database.

It all works well except that some sites seem to have distinct URLs for multiple parts of the same page. For example, a product URL might be:

http://www.example.com/product?id=52

Then it might have another URL for a different part of the page, such as reviews or comments:

http://www.example.com/product?id=52&revpage=1

My crawler sees each of these as a distinct URL. I've found some sites where one product has hundreds of distinct URLs. I've already added logic to ignore anything after a hash in the URL to avoid anchors, but I was wondering if anyone has suggestions for avoiding this problem. There could be a simple solution I'm not seeing.

At the moment it's slowing down the crawl/scrape process: on a site that might have only 100 products, it's adding thousands of URLs.

I thought about ignoring the query string, or even certain parts of it, but the product ID is usually located in the query string, so I couldn't figure out a way to do this without writing an exception for each site's URL structure.
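Just to illustrate the kind of thing I was considering, here is a rough sketch that whitelists query parameters. It assumes the product ID is always in an id parameter, which is exactly the assumption that breaks from site to site:

// Hypothetical: canonicalise a URL by keeping only whitelisted query parameters
function canonicalise($url, array $keep = array('id')) {
    $parts = parse_url($url);
    $query = '';
    if (!empty($parts['query'])) {
        parse_str($parts['query'], $params);
        // keep only the whitelisted parameters, in a stable order
        $params = array_intersect_key($params, array_flip($keep));
        ksort($params);
        if (!empty($params)) {
            $query = '?' . http_build_query($params);
        }
    }
    return $parts['scheme'] . '://' . $parts['host'] . $parts['path'] . $query;
}

// canonicalise('http://www.example.com/product?id=52&revpage=1') and
// canonicalise('http://www.example.com/product?id=52')
// both give 'http://www.example.com/product?id=52'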


To elaborate on my comment...

You could include the following code:

// $producturl is the URL where you first found a product to scrape
// $nexturl is the next URL you plan to crawl
if (strpos($nexturl, $producturl) === false) {
    crawl($nexturl); // only crawl pages that are not part of an already-scraped product
}
// ...then loop back to the next URL

I am guessing you are crawling in sequence, meaning you find a page, crawl all the links from that page, then go back one level and repeat. If you are not crawling in sequence, you could store all the pages where you found a product and use that list to check whether the new page you plan to crawl starts with a URL you have already crawled. If it does, you don't crawl the new page.
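For instance, a rough sketch of that check (the function and variable names are made up for illustration), assuming you keep the already-scraped product URLs in an array:

// $scrapedProductUrls holds the URLs where products were already found
function shouldCrawl($nextUrl, array $scrapedProductUrls) {
    foreach ($scrapedProductUrls as $productUrl) {
        // skip $nextUrl if it starts with an already-scraped product URL
        if (strpos($nextUrl, $productUrl) === 0) {
            return false;
        }
    }
    return true;
}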

I hope this helps. Good luck!


You could use a database and set a unique constraint on the ID or the name, so that if your crawler tries to add the same data again, an exception is raised. The simplest unique constraint would be a primary key.
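For example, a minimal sketch with PDO and MySQL (the table, column, and variable names are just placeholders, and $pdo is assumed to be an open connection):

// One-time setup (MySQL): the primary key makes duplicate product IDs impossible.
//   CREATE TABLE products (
//       product_id INT NOT NULL,
//       name       VARCHAR(255),
//       PRIMARY KEY (product_id)
//   );
try {
    $stmt = $pdo->prepare('INSERT INTO products (product_id, name) VALUES (?, ?)');
    $stmt->execute(array($productId, $name));
} catch (PDOException $e) {
    if ($e->errorInfo[1] == 1062) { // 1062 = MySQL duplicate-key error
        // product already stored, skip it
    } else {
        throw $e;
    }
}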

Edit regarding the URL parameter solution:

If you have problems fetching the right parameters from your URL, maybe a snippet from the Facebook PHP SDK could help.

protected function getCurrentUrl($noQuerys = false) {
  $protocol = isset($_SERVER['HTTPS']) && $_SERVER['HTTPS'] == 'on'
    ? 'https://'
    : 'http://';
  $currentUrl = $protocol . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'];
  $parts = parse_url($currentUrl); // http://de.php.net/manual/en/function.parse-url.php

  // drop known fb params
  $query = '';
  if (!empty($parts['query'])) {
    $params = array();
    parse_str($parts['query'], $params);
    foreach(self::$DROP_QUERY_PARAMS as $key) { // self::$DROP_QUERY_PARAMS is a list of params you dont want to have in your url
      unset($params[$key]);
    }
    if (!empty($params)) {
      $query = '?' . http_build_query($params, '', '&'); // '' (not null) as numeric prefix for newer PHP
    }
  }

  // use port if non default
  $port =
    isset($parts['port']) &&
    (($protocol === 'http://' && $parts['port'] !== 80) ||
     ($protocol === 'https://' && $parts['port'] !== 443))
    ? ':' . $parts['port'] : '';


  // rebuild
  if ($noQuerys) {
      // return URL without parameters aka querys
      return $protocol . $parts['host'] . $port . $parts['path'];
  } else {
      // return full URL
      return $protocol . $parts['host'] . $port . $parts['path'] . $query;
  }
}
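Since a crawler deals with arbitrary URLs rather than the current request in $_SERVER, here is a rough adaptation of the same idea to a URL string you already have. The $dropParams defaults are only examples; you would fill the list with the pagination/tracking keys you actually encounter (possibly per site):

// Drop known "noise" parameters from a crawled URL before de-duplicating it
function stripKnownParams($url, array $dropParams = array('revpage', 'sort', 'tab')) {
    $parts = parse_url($url);
    $query = '';
    if (!empty($parts['query'])) {
        parse_str($parts['query'], $params);
        foreach ($dropParams as $key) {
            unset($params[$key]);
        }
        if (!empty($params)) {
            $query = '?' . http_build_query($params, '', '&');
        }
    }
    $port = isset($parts['port']) ? ':' . $parts['port'] : ''; // simplified: keeps any explicit port
    return $parts['scheme'] . '://' . $parts['host'] . $port . $parts['path'] . $query;
}

// stripKnownParams('http://www.example.com/product?id=52&revpage=1')
// gives 'http://www.example.com/product?id=52'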
