HTTP request cost vs. page size cost?

I know it's a good practice to minimize the number of requests each page needs. For example, combining JavaScript files and using CSS sprites will greatly reduce the number of requests needed to render your page.

Another approach I've seen is to keep JavaScript embedded in the page itself, especially for JavaScript that's specific to that page and not really shared across other pages.
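To make the trade-off concrete, here's a rough sketch of the two approaches (the handler and file name are just placeholders):

    <!-- Embedded: no extra request, but the bytes are re-downloaded
         with every page view -->
    <script type="text/javascript">
        document.getElementById("signup").onclick = function () {
            // page-specific behavior here
        };
    </script>

    <!-- External: one extra request on first visit, but cacheable
         across subsequent page views -->
    <script type="text/javascript" src="/js/signup-page.js"></script>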

But my question is this:

At what point does my JavaScript grow so large that it becomes more efficient to pull the script into a separate file and accept the additional request for it?

In other words, how do I measure how many bytes equate to the cost of one request?

Since successive requests for the same JS file are served from cache, the only recurring cost is the cost of the request itself. Keeping the JS in the page, on the other hand, will always incur the cost of the additional page size, but will never incur the cost of an additional request.

Of course, I know several factors go into this: the speed of the client, bandwidth, latency. But there has to be a tipping point where it makes more sense to do one over the other.

Or is bandwidth so cheap (in speed, not money) these days that it takes many more bytes than it used to in order to exceed the cost of a request? The trend seems to be that page size is becoming less of a factor, while the cost of a request has plateaued.

Thoughts?


If you just look at the numbers and assume an average round-trip time of 100 ms for a request and an average connection speed of 5 Mbps, you can work out that up to 62.5 KB can be added to a page before breaking it out into a separate file becomes worthwhile. Assuming that gzip compression is enabled on your server, the real amount of JavaScript you can add is even larger.
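For reference, the arithmetic behind that figure:

    5 Mbps = 5,000,000 bits/s ÷ 8 = 625,000 bytes/s = 625 KB/s
    625 KB/s × 0.1 s (one 100 ms round trip) = 62.5 KB

That is, inline script below roughly that size takes less time to download than the round trip for a separate request would add.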

But, this number ignores a number of important considerations. For instance, if you move your JavaScript to a separate file, the user's browser can cache it more effectively such that a user that hits your page 100 times might only download the JavaScript file once. If you don't do this, and assuming that your webpage has any dynamic content whatsoever, then the same user would have to download the entire script every single time.

Another issue to consider is the maintainability of the page. As a general rule, the more JavaScript you add, the more difficult it becomes to maintain your page and make changes and updates without introducing bugs and other problems. So even if you don't have quite 62.5 KB of JavaScript and even if you don't care about the caching side of things, you have to ask yourself whether or not having a separate JavaScript file would improve maintainability and if so, whether it's worth sacrificing that maintainability for a slightly faster page load.

So there really isn't an exact answer here, but as a general rule I think that if the JavaScript is truly intrinsic to the page (onclick handlers, effects/animations, other things that interface directly with elements on the page), then it belongs with the page. But if you have a bunch of other code that your handlers, effects, and other things use more like a library/helper utility, then that code can be moved into a separate file. Favor maintainability of your code over both page size and load times. That's my recommendation, anyway.


This is a huge topic - you are indirectly asking about many different aspects of web performance, and there are a number of tricks involved, some of which wevals mentions.

From my own experience, I think it comes down partially to modularization and making trade-offs. For instance, it makes sense to bundle together JavaScript that's common across your entire site. If you serve the assets from a CDN and set the correct HTTP headers (Cache-Control, ETag, Expires), you can get a big performance boost.
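As a sketch, the response headers on a shared, long-lived bundle would look something like this (the values here are purely illustrative):

    HTTP/1.1 200 OK
    Content-Type: application/javascript
    Cache-Control: public, max-age=31536000
    Expires: Thu, 31 Dec 2037 23:55:55 GMT
    ETag: "5e3a9-4bf"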

It's true that you will incur the cost of the browser making a request and receiving a 304 Not Modified from the server, but at least that response is fast to send across the wire. However, you will (typically) still incur the cost of the server processing the request and deciding that the asset is unchanged. This is where web proxies like Squid and Varnish, and CDNs in general, shine.
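A hypothetical revalidation exchange, to show how little crosses the wire (path and ETag value are made up):

    GET /js/common.js HTTP/1.1
    Host: www.example.com
    If-None-Match: "5e3a9-4bf"

    HTTP/1.1 304 Not Modified
    ETag: "5e3a9-4bf"

No body is sent with the 304, but the server still had to look at the request and decide that nothing changed.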

On the topic of CDNs, especially with respect to JavaScript, it makes sense to pull libraries like jQuery from one of the public CDNs. For example, Google makes many of the most popular libraries available via its CDN, which is almost always going to be faster than serving them from your own server.
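For example (the version number is just for illustration):

    <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js"></script>

As a bonus, a visitor who has already downloaded that exact URL on another site may not need to fetch it at all.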

I also agree with wevals that page size is still very important, particularly for international sites. There are many countries where you get charged by how much data you download and so if your site is enormous there's a real benefit to your visitors when you serve them small pages.

But, to really boil it down, I wouldn't worry too much about "byte cost of request" vs "total download size in bytes" - you'd have to be running a really high-traffic website to worry about that stuff. And it's usually not an issue anyway since, once you get to a certain level, you really can't sustain any amount of traffic without a CDN or other caching layer in front of you.

It's funny, but I've noticed that with a lot of performance issues, if you design your code in a sensible and modular way, you will tend to find the natural separations more easily. So, bundle together things that make sense and keep one-offs by themselves as you write.

Hope this helps.


With the correct headers set (far-future headers; see 1), pulling the JS into a separate file is almost always the best bet, since all subsequent requests for the page will not make any request or connection at all for the JS file.
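For instance, a far-future header for your JS files can be set in nginx with something along these lines (a sketch, not a complete config):

    # cache all JavaScript for a year
    location ~* \.js$ {
        expires 1y;
        add_header Cache-Control public;
    }

The usual caveat: with headers that far out, you have to change the file name (e.g. app.v2.js) whenever the content changes, or returning visitors will never see the update.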

The one exception to this rule is a static website, where it's safe to use a far-future header on the HTML page itself so that it can be cached indefinitely.

As for what byte size equates to the cost of an HTTP connection, this is hard to determine because of the variables you mentioned, as well as many others. HTTP resource requests can be cached at nodes along the way to the user, they can often be parallelized, and a single connection can be reused for multiple requests (see 2).

Page size is still extremely important on the web. Mobile browsers are becoming much more popular, and with them come flaky connections through mobile providers. Try to keep file sizes small.

  1. http://developer.yahoo.com/performance/rules.html
  2. http://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol#Persistent_connections

Addition: It's worth noting that major page-size reductions can be achieved through minification and gzip compression, which are simple to enable through good build tools and web servers, respectively.
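In nginx, for instance, gzip for text assets is a couple of lines (a sketch; HTML is compressed by default once gzip is on):

    gzip on;
    gzip_types text/css application/javascript application/x-javascript;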
