robots.txt allow root only, disallow everything else?
I can't seem to get this to work, but it seems really basic.
I want the domain root to be crawled
http://www.example.com
But nothing else to be crawled, and all subdirectories are dynamic
http://www.example.com/*
I tried
User-agent: *
Allow: /
Disallow: /*/
but the Google webmaster test tool says all subdirectories are allowed.
Anyone have a solution for this? Thanks :)
According to the Backus-Naur Form (BNF) parsing definitions in Google's robots.txt documentation, the order of the Allow and Disallow directives doesn't matter. So changing the order really won't help you.
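The reason order is irrelevant: as I read Google's documentation, conflicts are resolved by rule specificity, with the longest matching path winning and a tie going to the less restrictive (Allow) rule. Here's a minimal Python sketch of that precedence logic under those assumptions; effective_rule and the tuple format are my own invention, not any real API, and it uses plain prefix matching for brevity (no * or $ handling):

def effective_rule(rules, path):
    # rules: a list of (directive, pattern) tuples, e.g. ("disallow", "/").
    # Keep the rules whose pattern matches the path, then pick the most
    # specific one: longest pattern first, "allow" winning any tie.
    applicable = [(d, p) for d, p in rules if path.startswith(p)]
    if not applicable:
        return "allow"  # nothing matches: crawling is allowed by default
    applicable.sort(key=lambda r: (len(r[1]), r[0] == "allow"), reverse=True)
    return applicable[0][0]

rules = [("allow", "/p"), ("disallow", "/")]
print(effective_rule(rules, "/page"))                  # "allow": /p is longer
print(effective_rule(list(reversed(rules)), "/page"))  # same result either way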
Instead, use the $ operator to indicate the closing of your path. $ means 'the end of the URL' (i.e., don't match anything from this point on).
Test this robots.txt. I'm certain it should work for you (I've also verified it in Google Search Console):
user-agent: *
Allow: /$
Disallow: /
This will allow http://www.example.com and http://www.example.com/ to be crawled, but everything else will be blocked.
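You can sanity-check those two rules by hand. Here's a minimal Python sketch that translates a robots.txt path pattern into a regex, treating * as 'zero or more characters' and $ as 'the end of the URL' per Google's documented wildcard support; pattern_matches and allowed are hypothetical names of mine, not any library's API:

import re

def pattern_matches(pattern, path):
    # Translate the pattern into a regex anchored at the start of the path:
    # '*' -> '.*', '$' -> end-of-string, everything else matched literally.
    regex = "".join(".*" if c == "*" else "$" if c == "$" else re.escape(c)
                    for c in pattern)
    return re.match(regex, path) is not None

def allowed(path):
    # Allow: /$ is the more specific rule, so check it first.
    if pattern_matches("/$", path):
        return True
    return not pattern_matches("/", path)  # Disallow: /

for p in ["/", "/page", "/index.html", "/sub/dir"]:
    print(p, "->", "crawl" if allowed(p) else "blocked")
# Only "/" comes back as crawlable; every other path is blocked.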
Note: the Allow directive satisfies your particular use case, but if you have index.html or default.php, those URLs will not be crawled.
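If you did need those crawled too, my expectation (an untested assumption on my part) is that adding explicit, more specific Allow rules for them would work, since the longer path should win over the blanket Disallow:

user-agent: *
Allow: /$
Allow: /index.html$
Disallow: /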
Side note: I'm only really familiar with Googlebot and bingbot behaviors. If there are any other engines you are targeting, they may or may not have specific rules on how the directives are listed out. So if you want to be "extra" sure, you can always swap the positions of the Allow and Disallow directive blocks; I just set them that way to debunk some of the comments.
When you look at the Google robots.txt specification, you can see that:
Google, Bing, Yahoo, and Ask support a limited form of "wildcards" for path values. These are:
- * designates 0 or more instances of any valid character
- $ designates the end of the URL
see https://developers.google.com/webmasters/control-crawl-index/docs/robots_txt?hl=en#example-path-matches
Then, as eywu said, the solution is:
user-agent: *
Allow: /$
Disallow: /