
How does a web crawler work? [closed]

It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form. For help clarifying this question so that it can be reopened, visit the help center. Closed 12 years ago.

Will a web crawler crawl the web and create a database of web pages, or will it just create a searchable index of the web? If it creates an index, who exactly gathers the data from the web pages and stores it in a database?


Though the question is slightly vague, let me put some words in to clarify.

  1. The crawler makes an HTTP request to a URL and analyzes the content of that web page. For example, it requests http://www.example.com and retrieves the content of the page.

  2. Once it has the content of the page, it analyzes it. This is where the H1, H2, and P tags come in: based on these tags it gets a clue of what the web page is about.

  3. It identifies the important/prominent words, called keywords, summarizes the page content, and puts it in its index.

  4. It also extracts hyperlinks to other pages from that page; these are used in its next hop to those sites, and it proceeds further. It is a never-ending story.

  5. So whenever a keyword is searched for, it looks it up in the keyword index and shows the matching pages in the results.

  6. Sometimes the crawler also dumps a copy of the web page into a special database called a cache, so that it can serve as an alternate copy of the original data.
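The steps above can be sketched in a few lines of Python using only the standard library. This is a minimal, hypothetical illustration, not a production crawler: it parses one page's H1/H2/P text into a keyword index, collects outgoing links for the next hop, and supports a simple keyword lookup. The URL and HTML below are made-up examples, and the network fetch itself is left out (a real crawler would download the HTML first, e.g. with `urllib.request`).

```python
import re
from collections import defaultdict
from html.parser import HTMLParser
from urllib.parse import urljoin

class PageAnalyzer(HTMLParser):
    """Extracts prominent text (h1/h2/p contents) and outgoing links from one page."""
    def __init__(self):
        super().__init__()
        self._capture = False   # True while inside an h1/h2/p element
        self.text_chunks = []   # prominent text found on the page
        self.links = []         # raw href values found on the page

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2", "p"):
            self._capture = True
        elif tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

    def handle_endtag(self, tag):
        if tag in ("h1", "h2", "p"):
            self._capture = False

    def handle_data(self, data):
        if self._capture:
            self.text_chunks.append(data)

def index_page(url, html, index, frontier):
    """Add the page's keywords to the inverted index and queue its links for the next hop."""
    parser = PageAnalyzer()
    parser.feed(html)
    words = re.findall(r"[a-z]+", " ".join(parser.text_chunks).lower())
    for word in words:
        index[word].add(url)                  # keyword -> pages containing it
    for link in parser.links:
        frontier.append(urljoin(url, link))   # resolve relative links for the next hop

def search(index, keyword):
    """Answer a query from the keyword index (step 5 above)."""
    return sorted(index.get(keyword.lower(), ()))

# Hypothetical page content, standing in for a fetched http://www.example.com
index = defaultdict(set)   # the searchable keyword index
frontier = []              # URLs discovered for the crawler's next hop
html = """<html><body>
<h1>Example Domain</h1>
<p>This page demonstrates crawling.</p>
<a href="/about">About</a>
</body></html>"""
index_page("http://www.example.com", html, index, frontier)

print(search(index, "crawling"))   # pages containing the keyword
print(frontier)                    # links to visit next
```

A real crawler would loop: pop a URL from the frontier, fetch it, call `index_page`, and repeat, while also respecting robots.txt, deduplicating URLs, and rate-limiting requests.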

