
Restricting JS links from search engine's crawling

I would like to prevent Google from following links I have in JS. I couldn't find how to do that in robots.txt. Am I looking in the wrong place?

Some more information: I can see that Google is crawling those pages even though the links only appear in JS. The reason I don't want it to crawl them is that this content depends on external APIs, and I don't want to waste my rate limit with them on Google's crawlers; the calls should only happen on user demand.


Direct from Google:

http://www.google.com/support/webmasters/bin/answer.py?answer=96569


Google probably won't find any links you have hidden in JS, but someone else could link to the same place.

It isn't links that matter, though; it is URLs. Just specify the URLs you don't want search engines to visit in robots.txt. The fact that you usually expose them to the browser via JS is irrelevant.
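For example, assuming the JS-generated links all live under a hypothetical path such as /api-content/ (substitute your actual URL prefix), a robots.txt entry like this asks all well-behaved crawlers not to fetch those URLs:

```
User-agent: *
Disallow: /api-content/
```

Note that robots.txt is advisory: Googlebot honors it, but it doesn't stop anyone from requesting the URLs directly.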

If you really want to limit access to the content, then just reducing discoverability probably isn't sufficient and you should put an authentication layer (e.g. password protection) in place.
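As a sketch of what such a layer can look like, here is HTTP Basic authentication in an nginx config fragment; the location path and password-file location are assumptions, not part of the original question:

```
# Require a username/password for the API-backed pages
# (path and htpasswd file are illustrative placeholders).
location /api-content/ {
    auth_basic           "Restricted content";
    auth_basic_user_file /etc/nginx/.htpasswd;
}
```

Crawlers receive a 401 response for these URLs, so they never trigger your external API calls.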

