
How to disallow bots from a single page or file

How to disallow bots from a single page while allowing all other content to be crawled.

It's important not to get this wrong, so I'm asking here; I can't find a definitive answer elsewhere.


Is this correct?

    User-Agent:*
    Disallow: /dir/mypage.html
    Allow: /


The Disallow line is all that's needed. It will block access to anything that starts with "/dir/mypage.html".
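If you want to sanity-check this behavior yourself, here is a minimal sketch using Python's standard-library urllib.robotparser. The "ExampleBot" name and the example.com URLs are placeholders for illustration only.

    # Sketch: feed the proposed robots.txt rules to Python's built-in parser
    # and check which paths a crawler would be allowed to fetch.
    from urllib import robotparser

    rules = """\
    User-Agent: *
    Disallow: /dir/mypage.html
    Allow: /
    """

    rp = robotparser.RobotFileParser()
    rp.parse(rules.splitlines())

    print(rp.can_fetch("ExampleBot", "http://example.com/dir/mypage.html"))    # False (blocked)
    print(rp.can_fetch("ExampleBot", "http://example.com/dir/otherpage.html")) # True
    print(rp.can_fetch("ExampleBot", "http://example.com/"))                   # True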

The Allow line is superfluous. The default for robots.txt is Allow: /. In general, Allow is not required. It's there so that you can override access to something that would be disallowed. For example, say you want to disallow access to the "/images" directory, except for images in the "public" subdirectory. You would write:

    Allow: /images/public
    Disallow: /images

Note that order is important here. Crawlers are supposed to use a "first match" algorithm. If you wrote the Disallow line first, a crawler would assume that access to "/images/public" was blocked.
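To see the ordering effect concretely, here is a small sketch that parses the hypothetical /images rules in both orders with urllib.robotparser, which follows the first-match behavior described above. Again, the bot name and example.com paths are placeholders.

    # Sketch: the same Allow/Disallow rules in both orders, under a first-match parser.
    from urllib import robotparser

    def allowed(rules, path):
        rp = robotparser.RobotFileParser()
        rp.parse(rules.splitlines())
        return rp.can_fetch("ExampleBot", "http://example.com" + path)

    allow_first = "User-agent: *\nAllow: /images/public\nDisallow: /images\n"
    disallow_first = "User-agent: *\nDisallow: /images\nAllow: /images/public\n"

    print(allowed(allow_first, "/images/public/logo.png"))    # True  - Allow matches first
    print(allowed(allow_first, "/images/private/photo.png"))  # False - only Disallow matches
    print(allowed(disallow_first, "/images/public/logo.png")) # False - Disallow matches first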
