Recursive link scraper in C#

I've been struggling with this all day and I can't seem to figure it out. I have a function that gives me a list of all links on a specific URL, and that works fine. However, I want to make this function recursive, so that it also searches the links found by the first pass, adds them to the list, and keeps going until it has gone through all the pages on the website. How can I make this recursive?

My code:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;

class Program
{
    public static List<LinkItem> urls;
    private static List<LinkItem> newUrls = new List<LinkItem>();

    static void Main(string[] args)
    {
        WebClient w = new WebClient();
        int count = 0;
        urls = new List<LinkItem>();
        newUrls = new List<LinkItem>();
        urls.Add(new LinkItem { Href = "http://www.smartphoto.be", Text = "" });

        while (urls.Count > 0)
        {
            foreach (var url in urls)
            {
                if (RemoteFileExists(url.Href))
                {
                    string s = w.DownloadString(url.Href);
                    newUrls.AddRange(LinkFinder.Find(s));
                }
            }
            urls = newUrls.Select(x => new LinkItem { Href = x.Href, Text = "" }).ToList();
            count += newUrls.Count;
            newUrls.Clear();
            ReturnLinks();
        }

        Console.WriteLine();
        Console.Write("Found: " + count + " links.");
        Console.ReadLine();
    }

    private static void ReturnLinks()
    {
        foreach (LinkItem i in urls)
        {
            Console.WriteLine(i.Href);
            //ReturnLinks();
        }
    }

    private static bool RemoteFileExists(string url)
    {
        try
        {
            HttpWebRequest request = WebRequest.Create(url) as HttpWebRequest;
            request.Method = "HEAD";
            // Get the web response (disposed so connections are released).
            using (HttpWebResponse response = request.GetResponse() as HttpWebResponse)
            {
                // Returns TRUE if the status code == 200
                return (response.StatusCode == HttpStatusCode.OK);
            }
        }
        catch
        {
            return false;
        }
    }
}

The code behind LinkFinder.Find can be found here: http://www.dotnetperls.com/scraping-html
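For reference, my code only relies on LinkItem exposing Href and Text, and on LinkFinder.Find returning one LinkItem per anchor tag in the page. A rough stand-in along those lines (my approximation, not the dotnetperls code itself) would be:

using System.Collections.Generic;
using System.Text.RegularExpressions;

// Rough stand-in for the dotnetperls types, based only on how they are used above.
public struct LinkItem
{
    public string Href;
    public string Text;
}

public static class LinkFinder
{
    public static List<LinkItem> Find(string html)
    {
        var list = new List<LinkItem>();
        // Pull one LinkItem out of every <a href="...">...</a> in the page.
        // A regex is good enough for a sketch; a real HTML parser would be more robust.
        foreach (Match m in Regex.Matches(html,
            @"<a[^>]+href=""(?<href>[^""]+)""[^>]*>(?<text>.*?)</a>",
            RegexOptions.IgnoreCase | RegexOptions.Singleline))
        {
            list.Add(new LinkItem { Href = m.Groups["href"].Value, Text = m.Groups["text"].Value });
        }
        return list;
    }
}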

Does anyone know how I can make either that function or the ReturnLinks function recursive? I'd prefer not to touch the LinkFinder.Find method, since it works perfectly for a single link; I should just be able to call it as many times as needed to expand my final URL list.


I assume you want to load each link, find the links within it, and continue until you run out of links?

Since the recursion depth could get very large, I would avoid recursion. Something like this should work, I think:

WebClient w = new WebClient();
int count = 0;    
urls = new List<string>();
newUrls = new List<LinkItem>();
urls.Add("http://www.google.be"); 

while (urls.Count > 0)
{
    foreach(var url in urls)
    {
        string s = w.DownloadString(url);
        newUrls.AddRange(LinkFinder.Find(s));
    }
    urls = newUrls.Select(x=>x.Href).ToList();
    count += newUrls.Count;
    newUrls.Clear();
    ReturnLinks(); // note: ReturnLinks would need to iterate strings now, since urls is a List<string> here
}

Console.WriteLine();
Console.Write("Found: " + count + " links.");
Console.ReadLine();
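One caveat with the loop above: nothing marks a page as visited, so pages that link to each other (or links that appear on every page) get downloaded over and over and the crawl may never terminate. A rough sketch of the same idea with a queue and a HashSet of visited URLs (the same-domain check is my assumption, adjust as needed):

// Needs: using System; using System.Collections.Generic; using System.Net;
WebClient w = new WebClient();
var visited = new HashSet<string>();
var queue = new Queue<string>();
queue.Enqueue("http://www.google.be");

while (queue.Count > 0)
{
    string url = queue.Dequeue();
    if (!visited.Add(url))          // already crawled this page, skip it
        continue;

    string html;
    try { html = w.DownloadString(url); }
    catch { continue; }             // unreachable page, skip it

    foreach (var link in LinkFinder.Find(html))
    {
        // Only follow absolute links on the same host (assumption; relative links
        // would first need to be resolved with new Uri(new Uri(url), link.Href)).
        Uri u;
        if (Uri.TryCreate(link.Href, UriKind.Absolute, out u) && u.Host.EndsWith("google.be"))
            queue.Enqueue(link.Href);
    }
}

Console.WriteLine("Found: " + visited.Count + " links.");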


static void Main()
{
    List<LinkItem> allUrls = FindAll("http://www.google.be");
}

private static List<LinkItem> FindAll(string address)
{
    WebClient w = new WebClient();
    List<LinkItem> list = new List<LinkItem>();

    foreach (var url in LinkFinder.Find(w.DownloadString(address)))
    {
        list.Add(url);
        // recurse using url.Href (or whatever string represents the address)
        list.AddRange(FindAll(url.Href));
    }

    return list;
}
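If you do want the recursive shape, it still needs a set of already-crawled URLs threaded through the calls, otherwise pages that link back to each other recurse forever (and a deep link graph can blow the stack, which is one reason to prefer the iterative version shown earlier). A sketch, assuming Href holds an absolute URL:

private static List<LinkItem> FindAll(string address, HashSet<string> visited)
{
    var list = new List<LinkItem>();
    if (!visited.Add(address))             // already crawled this page, stop the cycle
        return list;

    WebClient w = new WebClient();
    foreach (var url in LinkFinder.Find(w.DownloadString(address)))
    {
        list.Add(url);
        list.AddRange(FindAll(url.Href, visited));   // assumes Href is an absolute URL
    }
    return list;
}

// Usage:
// List<LinkItem> all = FindAll("http://www.google.be", new HashSet<string>());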