
Will content loaded by AJAX affect SEO/Search Engines?

I wonder whether content loaded dynamically by AJAX affects SEO / the ability of search engines to index the page.

I am thinking of building a continuously loading page, something like the Tumblr dashboard, where content is automatically loaded as the user scrolls down.


A year later...

A while back Google came out with specifications for how to create XHR content that may be indexed by search engines. It involves pairing content in your asynchronous requests with synchronous requests that can be followed by the crawler.

http://code.google.com/web/ajaxcrawling/
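For context, that scheme worked by using "hashbang" URLs (paths containing #!), which the crawler rewrote into a ?_escaped_fragment_= query parameter; the server was expected to answer that request with a plain HTML snapshot of the page the AJAX code would otherwise build. Below is a minimal sketch of the server side using TypeScript and Express; the /app route and renderHtmlSnapshot are made-up names for illustration, not part of the spec or of any library.

// Sketch of the server side of the (now deprecated) AJAX crawling scheme.
// The crawler rewrites a URL like  /app#!page=2  into
// /app?_escaped_fragment_=page=2  and expects plain HTML in response.
import express from "express";

const app = express();

app.get("/app", (req, res) => {
  const fragment = req.query["_escaped_fragment_"];
  if (typeof fragment === "string") {
    // Crawler request: serve a pre-rendered HTML snapshot of the AJAX state.
    res.send(renderHtmlSnapshot(fragment));
  } else {
    // Normal browser request: serve the JavaScript-driven page.
    res.sendFile("index.html", { root: "public" });
  }
});

// Hypothetical snapshot renderer; in practice this might run a headless
// browser or re-use server-side templates to produce the same markup the
// client-side JavaScript would generate.
function renderHtmlSnapshot(fragment: string): string {
  return `<html><body><h1>Content for state: ${fragment}</h1></body></html>`;
}

app.listen(3000);

A browser visiting /app#!page=2 runs the client-side JavaScript as usual, while the crawler requests /app?_escaped_fragment_=page=2 and receives the pre-rendered HTML.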

No idea whether other search giants support this spec, or whether Google even does. If anybody has any knowledge about the practicality of this method, I'd love to hear about their experience.

Edit: As of today, October 14, 2015, Google has deprecated their AJAX crawling scheme:

In 2009, we made a proposal to make AJAX pages crawlable. Back then, our systems were not able to render and understand pages that use JavaScript to present content to users. ... Times have changed. Today, as long as you're not blocking Googlebot from crawling your JavaScript or CSS files, we are generally able to render and understand your web pages like modern browsers.

H/T: @mark-bembnowski


Five years later...

Latest update on AJAX and SEO:

As of October 14, 2015

Google is now able to crawl and parse AJAX-loaded content. SPAs and other AJAX-rendered pages no longer need to maintain two versions of the site for SEO.


Short answer: It depends.

Here's why: say you have some content that you want indexed. Loading it with AJAX will ensure that it won't be, so that content should be loaded normally, as part of the initial HTML.

On the other hand, say you have some content that you wish to have indexed, but for one reason or another you do not wish to show it (this is not recommended and is not very nice to the end user, but there are valid use cases): you can load this content normally, and then hide or even replace it using JavaScript.
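A minimal sketch of that second case, assuming a hypothetical #long-description element and teaser text: the content ships in the initial HTML so crawlers can index it, and JavaScript hides it (or swaps in a teaser) for regular visitors.

// The content is present in the initial HTML; JavaScript only hides it.
document.addEventListener("DOMContentLoaded", () => {
  const full = document.querySelector<HTMLElement>("#long-description");
  if (full) {
    // Hide the indexed content from regular users...
    full.hidden = true;

    // ...and show a shorter teaser in its place.
    const teaser = document.createElement("p");
    teaser.textContent = "Read the summary instead.";
    full.insertAdjacentElement("afterend", teaser);
  }
});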

As for your case where you have "constantly loading" content: you can make sure it's indexed by providing links for search engines and non-JS user agents. For example, you can have a Twitter-like feed and, at the end of it, a "more" link that points to a page starting from the last item you displayed. You can hide the link using JavaScript so that normal users never know it's there, but crawlers will follow it and index that content anyway.
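Here is a rough sketch of that "hidden more link" idea. The markup would contain a real link, e.g. <a id="more" href="/posts?before=42">More</a>, whose href points at a server-rendered continuation page. The /posts?before=... URL format, the #more and #feed ids, and the scroll threshold are all assumptions for illustration.

const moreLink = document.querySelector<HTMLAnchorElement>("#more");

if (moreLink) {
  moreLink.hidden = true; // regular users never see it, crawlers still follow the href

  window.addEventListener("scroll", async () => {
    const nearBottom =
      window.innerHeight + window.scrollY >= document.body.offsetHeight - 200;
    if (!nearBottom) return;

    // Fetch the same URL the crawler would follow and append its items.
    const response = await fetch(moreLink.href, {
      headers: { Accept: "text/html" },
    });
    const html = await response.text();
    document.querySelector("#feed")?.insertAdjacentHTML("beforeend", html);

    // A real implementation would also update moreLink.href to the next page
    // and debounce the scroll handler.
  });
}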


If some content is loaded by an Ajax request, it is only loaded by user agents that run JavaScript code.

Search-engine robots generally don't support JavaScript (or don't support it well).

So chances are that content loaded by an Ajax request will not be seen by search-engine crawlers, which means it will not be indexed, and that is not good for your website.


Crawlers don't run JavaScript, so no, your content will not be visible to them. You must provide an alternative method of reaching that content if you want it to be indexed.

You should stick to what's called "graceful degradation" and "progressive enhancement". Basically this means that your website should keep functioning, and its content should remain reachable, even when some technologies (such as JavaScript) are disabled.

Build your website with classic navigation first, and then "ajaxify" it. This way, not only is it indexed correctly by search engines, it's also friendly to users who browse it on mobile devices, with JS disabled, and so on.
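As a rough sketch of what "ajaxifying" classic navigation might look like (the #content container and the same-origin check are assumptions, not a prescribed pattern): the links work as ordinary links without JavaScript, and the script merely enhances them.

// Progressive enhancement: intercept same-origin link clicks and swap the
// main content area in place; without JavaScript the links still navigate.
document.addEventListener("click", async (event) => {
  const link = (event.target as Element).closest("a");
  if (!link || new URL(link.href).origin !== location.origin) return;

  event.preventDefault(); // only reached when JS is running

  const response = await fetch(link.href);
  const html = await response.text();

  // Replace only the main content area with the fetched page's content.
  const doc = new DOMParser().parseFromString(html, "text/html");
  const next = doc.querySelector("#content");
  const current = document.querySelector("#content");
  if (next && current) {
    current.replaceWith(next);
    history.pushState(null, "", link.href); // keep the URL shareable and crawlable
  }
});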


Two years later, the Bing and Yahoo search engines also now support Google's Ajax Crawling Standard. Information about the standard can be found here: https://developers.google.com/webmasters/ajax-crawling/docs/getting-started.


The accepted answer on this question is no longer accurate. Since this post still shows in search results, I'll summarize the latest facts:

Sometime in 2009, Google released their AJAX crawling proposal. Other search engines added support for this scheme shortly thereafter. As of today, October 14, 2015, Google has deprecated their AJAX crawling scheme:

In 2009, we made a proposal to make AJAX pages crawlable. Back then, our systems were not able to render and understand pages that use JavaScript to present content to users. ... Times have changed. Today, as long as you're not blocking Googlebot from crawling your JavaScript or CSS files, we are generally able to render and understand your web pages like modern browsers.
