
Are there any downsides to prefetching webpages using javascript?

I'm currently experimenting with prefetching pages to increase perceived performance of our website, using the code below (req. jQuery).

Only 0.5% of our visitors use dial-up. I'm excluding querystrings (good old times), external links (anything with http: in the href) and PDFs (our large files are in this format). On a production site, what other possible negative scenarios apply to prefetching that I haven't considered?

<script type="text/javascript">
$(document).ready(function() {
    $("a").each(function() {
        // Prefetch the target page into the browser cache on hover.
        $(this).bind("mouseover", function() {
            var href = $(this).attr('href');
            if (
                (href.indexOf('?') == -1) &&                // skip querystrings
                (href.indexOf('http:') == -1) &&            // skip external/absolute links
                ($(this).hasClass('nopreload') == false) && // opt-out class
                (href.indexOf('.pdf') == -1)                // skip large PDF files
            ) {
                $.ajax({ url: href, cache: true, dataType: "text" });
            }
        });
        // Navigate on mousedown (left button only) instead of waiting for the click.
        $(this).bind("mousedown", function(e) {
            if (e.which == 1) {
                var href = $(this).attr('href');
                if ($(this).hasClass('nopreload') == false) {
                    window.location.href = href;
                    return false;
                }
            }
        });
    });
});
</script>

For eligible links, hovering preloads the page and mousedown navigates immediately (rather than after the button is released).


A right click will trigger a mousedown event too, so you might want to check the event's data (i.e. which button was pressed).
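
As a minimal sketch, jQuery normalizes the button into event.which (1 = left, 2 = middle, 3 = right), so the handler can bail out early, much like the check already in the question's mousedown handler:

$(this).bind("mousedown", function(e) {
    if (e.which != 1) {
        return; // ignore middle and right clicks
    }
    // ... navigate as in the original handler ...
});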

I suspect the speed gain for 20-30 KB of HTML source is rather small. Your function only preloads the raw HTML; it does not preload any image, CSS or JS files the page references.
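
If you did want to warm the cache for assets as well, one possible sketch is to scan the fetched markup for image URLs inside the ajax success callback. The regex extraction below is only a rough illustration (not a robust HTML parser), and it assumes relative image paths resolve the same way from the current page:

$.ajax({
    url: href,
    cache: true,
    dataType: "text",
    success: function(html) {
        // Pull src attributes out of the returned markup without injecting it
        // into the live DOM (which could execute scripts).
        var tags = html.match(/<img[^>]+src="([^"]+)"/g) || [];
        $.each(tags, function(i, tag) {
            var src = tag.match(/src="([^"]+)"/)[1];
            new Image().src = src; // the browser fetches and caches the image
        });
    }
});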


On badly coded sites (of which there are MANY), merely requesting a link can have an effect. For example, on many sites a delete button is in fact a link which, when clicked, deletes a record. You have to be absolutely sure your site has zero plain vanilla links that have harmful side effects when they receive a GET request.

You also have to be sure that users cannot include similar links themselves. I can also imagine links to external polling services, which for example let anyone run a poll in a forum by posting clickable links that update the poll and then redirect back to the referring page.
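
If you cannot audit every link (including user-posted ones), a defensive alternative is to flip the logic to opt-in and only prefetch links your own templates explicitly mark as safe. A sketch using a hypothetical data-prefetch attribute:

$(document).ready(function() {
    // Opt-in variant: only links explicitly marked with data-prefetch are prefetched,
    // so user-generated or side-effecting links are never touched.
    $('a[data-prefetch]').bind('mouseover', function() {
        $.ajax({ url: $(this).attr('href'), cache: true, dataType: 'text' });
    });
});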

Less harmfully, sites may do smart tricks to keep track of activity, and will then log each prefetched page as a visit. This can skew your site's statistics or logging and give you a distorted view of your users' activity.
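
One way to keep the statistics clean is to tag the prefetch requests so server-side logging can filter them out. A sketch assuming jQuery 1.5+ (for the headers option); the header name "X-Prefetch" is just a convention, not anything standard:

$.ajax({
    url: href,
    cache: true,
    dataType: "text",
    headers: { "X-Prefetch": "1" } // logging/analytics can ignore requests carrying this header
});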

That said, I like the idea! :-)


In my opinion, there isn't much point. If you use the same JavaScript and CSS site-wide anyway, the only extra thing you'd be caching is images, and that can be done with JavaScript on a page-by-page basis. It seems to me that you're giving your server far too much needless work.

When you hover over a link it will load and cache the page, correct. But when you navigate to that page, the browser will request it again and the server will still have to build and send it, which makes the caching step pointless.
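
That holds for dynamically built pages served without cache headers; the prefetched copy is only reused on navigation if the response is cacheable. A minimal sketch, assuming a Node/Express-style backend purely for illustration (not from the original post), of making such a page reusable for a short time:

var express = require('express');
var app = express();

app.get('/article/:id', function(req, res) {
    // Allow the browser to reuse the (pre)fetched copy for five minutes, so the
    // hover-time ajax request and the actual navigation can share one response.
    res.set('Cache-Control', 'private, max-age=300');
    res.send('<html><body>Article ' + req.params.id + '</body></html>');
});

app.listen(3000);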


Have you considered prefetching as defined in HTML5? I'm not quite sure how many browsers currently support it, but it's worth checking out IMHO. I thought Firefox supported it but couldn't get it to work; Chrome does seem to do it.

Normally you can set it in the head:

<link rel="prefetch" href="abc.html">

A quick test doing it dynamically on mouseover also works in Chrome:

<a onmouseover="var l=document.createElement('link'); l.rel='prefetch'; l.href=this.href; document.getElementsByTagName('head')[0].appendChild(l);" href="abc.html">abc</a>
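
For an unobtrusive variant, the same idea can be combined with the hover approach from the question (a sketch; browser support for rel=prefetch varies, as noted above):

$(document).ready(function() {
    $('a').bind('mouseover', function() {
        // Let the browser schedule a low-priority prefetch of the hovered link.
        var link = document.createElement('link');
        link.rel = 'prefetch';
        link.href = this.href;
        document.getElementsByTagName('head')[0].appendChild(link);
    });
});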


Here's a downside: Chrome is trying to prefetch my event-tracking pixels as I write them to the page, which results in double impressions.
