Google found my backup web site. What can I do about it?

A few days ago we replaced our web site with an updated version. The original site's content was migrated to http://backup.example.com. Search engines have no need to know about the old site, and I do not want them to know about it.

While we were in the process of updating our site, Google crawled the old version.

Now when using Google to search for our web site, we get results for both the new and old sites (e.g., http://www.example.com and http://backup.example.com).

Here are my questions:

  1. Can I update the backup site content with the new content? Then we can get rid of all the old content. My concern is that Google will lower our page ranking due to duplicate content.
  2. If I prevent the old site from being accessed, how long will it take for the information to clear out of Google's search results?
  3. Can I use a robots.txt Disallow rule to block Google from the old web site?


You should probably put a robots.txt file on your backup site and tell robots not to crawl it at all. Google will obey the restrictions, though not all crawlers will. You might also want to check out the options available to you at Google's Webmaster Central, and ask Google whether they will remove the errant links from their data for you.
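A minimal robots.txt for this (assuming backup.example.com serves its own document root, so the file ends up at http://backup.example.com/robots.txt) might look like:

    # robots.txt at the root of backup.example.com
    # Tells all well-behaved crawlers not to fetch anything on this host.
    User-agent: *
    Disallow: /

Note that this only stops compliant crawlers from fetching pages; URLs Google already knows about can linger in the index until you request removal via Webmaster Central.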


You can always use a robots.txt file on the backup.* site to disallow Google from indexing it.

More info here: link text


Are the URL formats consistent enough between the backup and current sites that you could redirect a given page on the backup site to its equivalent on the current one? If so, have the backup site send a 301 Permanent Redirect to the equivalent page on the site you actually want indexed. The redirecting pages should drop out of the index (after how much time, I do not know).
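If the whole path structure matches, a single directive can cover every page (a sketch assuming the backup site runs Apache with mod_alias; adjust for your server):

    # .htaccess at the root of backup.example.com (assumes Apache + mod_alias)
    # Prefix-matches every URL and issues a 301 to the same path on the new host.
    Redirect permanent / http://www.example.com/

With prefix matching, a request for http://backup.example.com/about.html is redirected to http://www.example.com/about.html, and so on for every path.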

If not, definitely look into robots.txt, as Zepplock mentioned. After setting up the robots.txt, you can expedite removal from Google's index with their Webmaster Tools.


You can also add a rule in your scripts to redirect each page to the new one with a 301 header, as in the sketch below.
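For example, if the backup pages are served by an application, the catch-all rule might look like this (a minimal sketch assuming a Python/Flask app; the framework and hostname are assumptions, since the answer doesn't say what the scripts are written in):

    # app.py - redirect every request on the backup host to the new site
    from flask import Flask, redirect

    app = Flask(__name__)

    NEW_HOST = "http://www.example.com"  # the live site (assumed)

    @app.route("/", defaults={"path": ""})
    @app.route("/<path:path>")
    def send_to_new_site(path):
        # 301 marks the move as permanent, so crawlers should
        # transfer the old URL's standing to the new one.
        return redirect(f"{NEW_HOST}/{path}", code=301)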


Robots.txt is a good suggestion, but... Google doesn't always listen. Yeah, that's right, they don't always listen.

So, disallow all spiders, but also put this in your page header:

<meta name="robots" content="noindex, nofollow, noarchive" />

It's better to be safe than sorry. Meta commands are like yelling at Google, "I DON'T WANT YOU TO DO THIS TO THIS PAGE". :)

Do both, save yourself some pain. :)


I suggest you either add a noindex meta tag to all the old pages or just disallow them via robots.txt; blocking via robots.txt is the simplest. One more thing: add a sitemap to the new site and submit it in Webmaster Tools, which will improve your new website's indexing.
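Besides submitting it in Webmaster Tools, you can advertise the sitemap from the new site's robots.txt (a sketch; the sitemap filename and path are assumptions):

    # robots.txt at the root of www.example.com
    User-agent: *
    Disallow:

    # Point crawlers at the sitemap so new pages are picked up faster.
    Sitemap: http://www.example.com/sitemap.xml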


Password-protect the web pages or directories that you don't want web spiders to crawl or index by putting password-protection directives in the .htaccess file (if one is present in your website's root directory on the server; otherwise create a new one and upload it). The web spiders will never know the password, and hence won't be able to index the protected directories or web pages.
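A minimal sketch of such an .htaccess, assuming Apache basic authentication (the realm name and the .htpasswd path are placeholders):

    # .htaccess at the root of the backup site (assumes Apache)
    AuthType Basic
    AuthName "Backup Site"
    # Password file created with: htpasswd -c /path/to/.htpasswd someuser
    AuthUserFile /path/to/.htpasswd
    Require valid-user

Every request, including a crawler's, now gets a 401 until valid credentials are supplied, so nothing behind it can be indexed.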


You can block any particular URLs in Webmaster Tools; check it out. You can also block them using robots.txt. Remove the sitemap for your old backup site, and put a noindex, nofollow tag on all of your old backup pages. I handled this same situation for one of my clients.
