Web spiders and HTTP auth
I have an admin application that requires HTTP auth over SSL. I've included the path to the admin app in my site's robots.txt file.
I would rather not have the path to the admin app visible anywhere. Will HTTP auth alone stop web spiders from indexing the page?
If you respond with a suitable 4xx HTTP status code (but not HTTP 410 or HTTP 404), then yes, HTTP auth will stop Google from indexing the page. HTTP auth normally answers unauthenticated requests with 401, which qualifies.
See: http://www.google.com/support/webmasters/bin/answer.py?answer=40132
Additionally, you can send the
X-Robots-Tag: noindex
HTTP header to make extra sure.
see: http://code.google.com/web/controlcrawlindex/docs/robots_meta_tag.html
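A minimal sketch of the two measures above, using only the Python standard library for illustration. The admin path prefix and realm name are made-up examples, not anything from the question:

```python
# Sketch: answer unauthenticated requests to an admin path with 401
# and an explicit X-Robots-Tag: noindex header. The path prefix and
# realm are hypothetical placeholders.
ADMIN_PREFIX = "/admin"  # hypothetical admin app path


def response_headers(path, authorized):
    """Return (status_code, headers) for a request to `path`."""
    if path.startswith(ADMIN_PREFIX) and not authorized:
        # 401 is a "suitable 4xx" status: Google will not index the page.
        # X-Robots-Tag: noindex states the intent explicitly as well.
        return 401, {
            "WWW-Authenticate": 'Basic realm="admin"',
            "X-Robots-Tag": "noindex",
        }
    return 200, {}
```

In a real app the same headers would be set by the web server or framework handling HTTP auth; the point is only that both signals travel in the response, so the crawler must be allowed to fetch the URL to see them.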
Oh, and including the URL in robots.txt makes it *more* likely that Google indexes the page. robots.txt is a crawling directive; it basically says "do not fetch the content of this URL". So Google never learns that the page sits behind HTTP auth, and because crawling is optional for indexing (yes, really), the URL might (and that is a very big might) still show up in the Google search results. I explained the Google(bot) funnel in more detail here: pages not indexed by Google.
The right HTTP status code and the X-Robots-Tag header are better suited to making sure a URL does not show up in Google (but both are useless if the robots.txt directive stays in place, because the crawler never fetches the response that carries them).
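The reasoning above can be condensed into a small decision function. This is a sketch of the logic as described, not Google's actual algorithm; the status codes treated as "keep out of the index" follow the answer's claims:

```python
# Sketch of the crawl-vs-index logic described above: if robots.txt
# blocks the fetch, Google never sees the status code or the
# X-Robots-Tag header, so the bare URL may still surface in results.
def may_appear_in_results(robots_blocked, status, noindex_header):
    """Return True if the URL could still show up in search results."""
    if robots_blocked:
        # Crawl forbidden: response headers are never seen, and the
        # URL can be indexed from links alone.
        return True
    if status in (401, 403, 404, 410):
        # A 4xx response keeps the page out of the index
        # (404/410 simply mean "gone" rather than "protected").
        return False
    # Page is fetchable and returns 200: only noindex keeps it out.
    return not noindex_header
```

Note how `may_appear_in_results(True, 401, True)` is still True: both protections are defeated by the robots.txt block.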