One robots.txt to allow crawling of only the live website; the rest should be disallowed

I need guidance on using robots.txt. The problem is as follows.

I have one live website, "www.faisal.com" (or "faisal.com"), and two testing web servers:

"faisal.jupiter.com" and "faisal.dev.com"

I want robots.txt to handle all of this: crawlers should not index pages from "faisal.jupiter.com" or "faisal.dev.com", and should only be allowed to index pages from "www.faisal.com" or "faisal.com".

Ideally, I want a single robots.txt file that sits on all the web servers and allows indexing of only the live website.


The Disallow directive only specifies relative URLs, so I don't think you can use the same robots.txt file for all of them.
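Concretely, the live server would serve a permissive robots.txt while each test server serves a restrictive one. A minimal sketch of the two files, using standard robots.txt syntax (lines starting with # are comments):

```text
# robots.txt served from www.faisal.com / faisal.com: allow everything
User-agent: *
Disallow:

# robots.txt served from faisal.jupiter.com and faisal.dev.com: block everything
User-agent: *
Disallow: /
```

An empty Disallow value permits all paths, while "Disallow: /" blocks the entire site for well-behaved crawlers.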

Why not force HTTP authentication on the dev/test servers?

That way, robots won't be able to crawl those servers at all.

It also seems like a good idea if you want specific people to be able to check them, but not everybody trying to find flaws in your not-yet-debugged new version...

Especially now that you've given the addresses to everybody on the web.
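For Apache, HTTP Basic authentication can be enabled with a few directives. A minimal sketch, assuming the password file lives at /etc/apache2/.htpasswd (an example path; create it with the htpasswd utility):

```text
# Hypothetical .htaccess placed on the dev/test servers only
AuthType Basic
AuthName "Development server"
AuthUserFile /etc/apache2/.htpasswd
Require valid-user
```

With this in place, crawlers receive a 401 response and cannot index anything, regardless of robots.txt.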


Depending on who needs to access the dev and test servers (and from where), you could use .htaccess or iptables to restrict access at the IP address level.
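As an illustration of the .htaccess approach, Apache 2.4 can restrict a site to a known address range. The network below is an example value, not taken from the question:

```text
# Hypothetical .htaccess on the dev/test servers (Apache 2.4 syntax);
# 203.0.113.0/24 stands in for your office or VPN network
Require ip 203.0.113.0/24
```

Requests from any other address are refused with a 403, which keeps both crawlers and curious strangers out.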

Or, you could serve the robots.txt file separately from the web application itself, so that you can control its contents per environment.
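One way to do that is to generate robots.txt dynamically based on the request's Host header. A minimal sketch in Python; the function name and the host list are illustrative, with the hostnames taken from the question:

```python
# Serve a permissive robots.txt only for the live site;
# every other host (dev/test servers) gets a disallow-all response.
LIVE_HOSTS = {"www.faisal.com", "faisal.com"}

ALLOW_ALL = "User-agent: *\nDisallow:\n"
DISALLOW_ALL = "User-agent: *\nDisallow: /\n"

def robots_txt_for(host: str) -> str:
    """Return the robots.txt body appropriate for the given Host header."""
    hostname = host.split(":")[0].lower()  # strip any port suffix
    return ALLOW_ALL if hostname in LIVE_HOSTS else DISALLOW_ALL
```

You would wire this into whatever handles the /robots.txt route in your framework, so the same deployed code behaves correctly on every server.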
