Do you get penalized by search engines when you let search engine crawlers pass through but add an additional step for users?

I am currently working on a project in which several parts of the website may be restricted based on the area in which the user resides. When a user accesses such a page, they are redirected to a form they must complete in order to view the content.

Since I want search engines to index the content, I am creating exceptions for search engine crawlers so that they can access the content directly.

I am cherry-picking some search engines from this page, and my solution would be to check the crawler's IP address (which can be found on the page I linked) and grant access based on that.
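For illustration, the check I have in mind would be something like the sketch below (the network range is just one published Googlebot block used as an example; a real whitelist would need an entry for every crawler I make an exception for):

```python
import ipaddress

# Example only: one published Googlebot range. A real whitelist would list
# every network block of every crawler that should bypass the form.
CRAWLER_NETWORKS = [ipaddress.ip_network("66.249.64.0/19")]

def is_whitelisted_crawler(ip_string):
    """Return True if the request IP falls inside a whitelisted crawler range."""
    ip = ipaddress.ip_address(ip_string)
    return any(ip in network for network in CRAWLER_NETWORKS)
```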

Is this solution viable? I am asking because I read an article on the official Google Webmaster Central blog which recommended performing a reverse DNS lookup on the bot in order to verify its authenticity.
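As I understand it, that verification is a reverse DNS lookup followed by a forward lookup to confirm the hostname resolves back to the original IP. A minimal sketch of that, using Python's standard socket module (the function name is mine):

```python
import socket

def is_verified_googlebot(ip_string):
    """Verify a claimed Googlebot: reverse DNS, then forward-confirm the IP."""
    try:
        # Reverse lookup: the PTR hostname should end in googlebot.com or google.com
        host, _, _ = socket.gethostbyaddr(ip_string)
        if not host.endswith((".googlebot.com", ".google.com")):
            return False
        # Forward lookup: the hostname must resolve back to the original IP,
        # since anyone can set an arbitrary PTR record on their own address space
        return ip_string in socket.gethostbyname_ex(host)[2]
    except (socket.herror, socket.gaierror):
        return False
```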

I should mention that this has no security implications.

TL;DR: do I get penalized if I allow the search engine bot to go directly to the content while the user is redirected? Which approach is better in terms of cost/benefit: user agent, IP address, or reverse DNS lookup?


The answer is NO,

but be aware that some users will view your page through the Google cache instead, bypassing your restrictions.
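If that matters to you, one way to discourage cached copies is the noarchive directive, for example sent as an X-Robots-Tag response header. A minimal sketch, assuming a Flask app (the framework choice is only for illustration):

```python
from flask import Flask

app = Flask(__name__)

@app.route("/restricted")
def restricted_page():
    return "Geo-restricted content"

@app.after_request
def add_noarchive(response):
    # "noarchive" asks search engines not to expose a cached copy of the page
    response.headers["X-Robots-Tag"] = "noarchive"
    return response
```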
