How does a web crawler work? [closed]
Does a web crawler crawl the web and build a database of web pages, or does it just create a searchable index of the web? If it only creates an index, who exactly gathers the data from the web pages and stores it in a database?
Though the question is slightly vague, let me add some words to clarify.
A crawler makes an HTTP request to a URL and analyses the information on that web page. For example, it makes an HTTP request to http://www.example.com and retrieves the content of the page.
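This first step can be sketched in a few lines of Python using only the standard library. The `fetch` helper below is a hypothetical name for illustration; a production crawler would also handle timeouts, redirects, robots.txt, and error codes:

```python
import urllib.request

def fetch(url):
    """Issue an HTTP GET request and return the response body as text."""
    with urllib.request.urlopen(url) as resp:
        charset = resp.headers.get_content_charset() or "utf-8"
        return resp.read().decode(charset)

# e.g. html = fetch("http://www.example.com")
```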
Once it has the content of the page, it analyses it. This is where the H1, H2 and P tags become important: based on these tags, the crawler gets a clue of what the web page is all about.
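One simple way to pull out those tags, sketched with Python's built-in `html.parser` (the class name `TagExtractor` is made up for this example; real crawlers use far more robust parsers):

```python
from html.parser import HTMLParser

class TagExtractor(HTMLParser):
    """Collect the text inside <h1>, <h2> and <p> tags,
    since those give a rough picture of what the page is about."""
    def __init__(self):
        super().__init__()
        self._current = None
        self.sections = {"h1": [], "h2": [], "p": []}

    def handle_starttag(self, tag, attrs):
        if tag in self.sections:
            self._current = tag

    def handle_endtag(self, tag):
        if tag == self._current:
            self._current = None

    def handle_data(self, data):
        if self._current and data.strip():
            self.sections[self._current].append(data.strip())

parser = TagExtractor()
parser.feed("<h1>Example Domain</h1><p>This domain is for examples.</p>")
# parser.sections["h1"] -> ["Example Domain"]
```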
It identifies the important/prominent words, called keywords, summarises the page content, and puts it in its index.
It also extracts hyperlinks to other pages from that page; those are used in its next jump, to those pages, and it proceeds further. It is a never-ending story.
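That "jump to the next page" loop is essentially a graph traversal. Here is a minimal breadth-first sketch; the in-memory `web` dict and its example.com-style URLs are stand-ins for real HTTP fetches, and the crawler remembers visited URLs so the never-ending story at least never repeats itself:

```python
from collections import deque

def crawl(start_url, get_links, max_pages=100):
    """Breadth-first traversal: visit a page, queue its outgoing links,
    and never revisit a URL already seen."""
    seen = {start_url}
    queue = deque([start_url])
    order = []
    while queue and len(order) < max_pages:
        url = queue.popleft()
        order.append(url)  # a real crawler would fetch and index here
        for link in get_links(url):
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return order

# A tiny in-memory "web" standing in for real HTTP fetches (hypothetical URLs):
web = {
    "http://a.example/": ["http://b.example/", "http://c.example/"],
    "http://b.example/": ["http://a.example/"],  # back-link: must not cause a loop
    "http://c.example/": ["http://d.example/"],
    "http://d.example/": [],
}
print(crawl("http://a.example/", lambda u: web.get(u, [])))
```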
So whenever a keyword is searched for, the engine looks it up in the keyword index and shows the matching pages in the results.
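The keyword database described here is usually an inverted index: a mapping from each word to the set of pages containing it, so a lookup is a single dictionary access. A toy sketch (the page texts and URLs are invented for illustration):

```python
from collections import defaultdict

def build_index(pages):
    """Map each keyword to the set of URLs whose text contains it."""
    index = defaultdict(set)
    for url, text in pages.items():
        for word in text.lower().split():
            index[word].add(url)
    return index

pages = {  # hypothetical crawled pages
    "http://a.example/": "Python web crawler tutorial",
    "http://b.example/": "Python snake facts",
}
index = build_index(pages)
# index["python"] -> both URLs; index["crawler"] -> only http://a.example/
```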
Sometimes the crawler also stores a copy of each web page in a special database called a cache, so that it can serve as an alternate copy of the original data.