
Can I prevent spiders from accessing a page with certain GET parameters?

We have a page that can optionally take an ID as a GET parameter. If an invalid ID is provided, the page throws an error and sends out a notification that someone is accessing the page incorrectly. Adding fuel to the fire, IDs can be valid for a while and then expire.

We're having a problem where search engine bots are hitting the page with old, expired IDs. This means we get a bunch of "false positive" alerts every time we get spidered. I'd love to have some way to tell the bots to go ahead and crawl the page, but not use the GET parameter--just index the parameter-less page. Is this even remotely possible with a robots.txt file or something similar?


Note: I know the best way to solve this is to change the page's behavior, and that is, in fact, happening in a few weeks. I'm just looking for a solution in the meantime.


Inside the if statement where you check $_GET, put this HTML in the page's <head>:

<meta name="robots" content="noindex, nofollow">
<meta name="googlebot" content="noindex, nofollow">


You can suggest that spiders ignore certain parts of your URL with the following in robots.txt:

User-agent: *
Disallow: *id=

Edit to clarify: This will cause spiders to ignore any URLs with id=blah in the GET string -- it doesn't magically "strip off" the id= part. But, this is effectively what you want since the normal URL with no "?id=" parameters returns the data you want indexed.
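
One hedged variation: wildcards in Disallow paths are an extension to the original robots.txt standard, but the major crawlers (Googlebot, Bingbot) honor them, and they expect path patterns to begin with a slash, so a slightly more conventional form of the same rule would be:

User-agent: *
Disallow: /*id=

With this in place, URLs like /page?id=123 or /page?foo=1&id=123 are blocked from crawling, while /page with no parameters is still crawled and indexed. Keep in mind that Disallow only blocks crawling, not indexing, so a blocked URL can still show up in results as a bare link; the noindex meta tag above is what actually removes it from the index.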
