Some way to guess the speed of the client connection
I have to make a dynamic decision about the weight of the content to send to the client, based on his/her connection speed.
That is: if the client is using a mobile device with a 3G (or slower) connection, I send him/her lightweight content. If he/she is using WiFi or a faster connection, I send him/her the complete content.
I tried to measure the time between reloads by sending the client a Location: myurl.com header (with some info to identify the client). This works on desktop browsers and some full mobile browsers (like Obigo), but it doesn't work on mini (proxy) browsers like Opera Mini or UCWeb: these browsers report the connection time between my server and the proxy server, not the mobile device.
The same occurs if I try to reload the page with a <meta> tag or JavaScript's document.location.
Is there some way to discover or measure the speed of the client's connection, or whether he/she is using 3G or WiFi etc., that works on mini browsers (i.e., that lets me identify a slow connection through a mini browser)?
This is a great question. I haven't run across any techniques for estimating client speed from a browser before. I do have an idea, though; I haven't put more than a couple minutes of thought into this, but hopefully it'll give you some ideas. Also, please forgive my verbosity:
First, there are two things to consider when dealing with client-server performance: throughput and latency. Generally, a mobile client is going to have low bandwidth (and therefore low throughput) compared to a desktop client. Additionally, the mobile client's connection may be more error-prone and therefore have higher latency. However, in my limited experience, high latency does not imply low throughput, and conversely, low latency does not imply high throughput.
Thus, you may need to distinguish between latency and throughput. Suppose the client sends a timestamp (let's call it "A") with each HTTP request and the server simply echoes it back. The client can then subtract this returned timestamp from its current time to estimate how long the request took to make the round trip. This time includes almost everything: network latency, plus the time it took the server to fully receive your request.
Now, suppose the server sends the timestamp "A" back first, in the response headers, before sending the entire response body. Also assume you can incrementally read the server's response (e.g. non-blocking IO; there are a variety of ways to do this). This means you can get your echoed timestamp before reading the rest of the server's response. At this point, the client time "B" minus the request timestamp "A" is an approximation of your latency. Save this, along with the client time "B".
Once you've finished reading the response, the amount of data in the response body divided by the new client time "C" minus the previous client time "B" is an approximation of your throughput. For example, suppose C − B = 10 s and you've read 100 kB of data; then your throughput is 10 kB/s.
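Here is a minimal client-side sketch of that scheme, assuming a hypothetical /echo endpoint that returns the client's timestamp in an X-Echo-Timestamp response header before streaming a payload of known size:

```javascript
// Sketch only: /echo is a hypothetical endpoint that echoes the "t" query
// parameter in an X-Echo-Timestamp response header, then streams a payload.
async function measureConnection() {
  const a = Date.now();                          // timestamp "A"
  const response = await fetch('/echo?t=' + a);  // fetch resolves on headers

  // The echoed timestamp arrives with the headers, before the body.
  const echoed = Number(response.headers.get('X-Echo-Timestamp'));
  const b = Date.now();                          // timestamp "B"
  const latencyMs = b - echoed;                  // ≈ round-trip latency

  // Incrementally read the body and count the bytes received.
  const reader = response.body.getReader();
  let bytes = 0;
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    bytes += value.length;
  }

  const c = Date.now();                          // timestamp "C"
  const throughputKBps = (bytes / 1024) / ((c - b) / 1000);
  return { latencyMs, throughputKBps };
}
```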
Once again, mobile client connections are error-prone and have a tendency to change in strength over time. Thus, you probably don't want to test the throughput only once. In fact, you might as well measure the throughput of every response and keep a moving average of the client's throughput. This will reduce the likelihood that an unusually bad throughput on one request causes the client's quality to be downgraded, or vice versa.
Provided this method works, all you need to do is decide on a policy for what content the client gets. For example, you could start in "low quality" mode and, if the client sustains good enough throughput for some period of time, upgrade them to the high-quality content. Then, if their throughput drops back down, downgrade them to low quality.
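And a sketch of the moving-average policy; the smoothing factor and the upgrade/downgrade thresholds are made-up values you would tune for your own content:

```javascript
// Sketch only: exponential moving average plus a hysteresis band so one
// outlier measurement doesn't flip the client between quality levels.
const ALPHA = 0.3;            // weight of the newest sample (assumed value)
const UPGRADE_KBPS = 100;     // assumed thresholds, tune for your content
const DOWNGRADE_KBPS = 50;

let avgThroughput = null;
let quality = 'low';          // start in "low quality" mode

function recordSample(throughputKBps) {
  avgThroughput = avgThroughput === null
    ? throughputKBps
    : ALPHA * throughputKBps + (1 - ALPHA) * avgThroughput;

  if (quality === 'low' && avgThroughput > UPGRADE_KBPS) quality = 'high';
  else if (quality === 'high' && avgThroughput < DOWNGRADE_KBPS) quality = 'low';
  return quality;
}
```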
EDIT: clarified some things and added throughput example.
First thing: running over SSL (HTTPS) will avoid a lot of proxy nonsense. It'll also stop things like compression (which may make HTML, CSS, etc. load faster, but won't help for already-compressed data).
The time to load a page is latency + size ÷ bandwidth. Even if the latency is unknown, measuring a small file and a large file can give you the bandwidth:
Let L be the latency and B the transfer time per kilobyte (the reciprocal of the bandwidth), both unknown.
Let t₁ and t₂ be the measured download times.
In this example, the two sizes are 128 kB and 256 kB.
t₁ = L + B × 128 // sure would be nice if SO had Τεχ
t₂ = L + B × 256
t₂ − t₁ = (L + B × 256) − (L + B × 128)
        = B × 256 − B × 128
        = B × 128
So, you can see that if you divide the difference in times by the difference in page sizes, you get B, the per-kilobyte transfer time; the bandwidth is its reciprocal. For example, if t₂ − t₁ = 0.25 s, then B = 0.25/128 ≈ 2 ms per kB, i.e. roughly 512 kB/s. Taking a single measurement may yield weird results because latency and bandwidth are not constant. Repeating a few times (and throwing out outliers and absurd [e.g., negative] values) will converge on the true average bandwidth.
You can do these measurements in JavaScript easily, in the background, using any AJAX framework: get the current time, send off the request, then note the clock time when the response is received. The requests themselves should be the same size, so that the overhead of sending them is just part of the latency. You'll probably want to use different hosts, though, to defeat persistent connections; either that, or configure your server to refuse persistent connections, but only for your test files.
I suppose I'm actually abusing the word latency a little: it includes the time for all the constant overhead (e.g., sending the request). Really, it's the latency from wanting the first byte of payload to receiving it.
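A sketch of this in client-side JavaScript, assuming two hypothetical uncompressed test files (/test-128k and /test-256k) served with caching disabled:

```javascript
// Sketch only: /test-128k and /test-256k are hypothetical test files;
// a cache-busting query parameter defeats caches along the way.
async function timeDownload(url) {
  const start = Date.now();
  await fetch(url + '?nocache=' + Math.random()).then(r => r.arrayBuffer());
  return (Date.now() - start) / 1000;   // seconds
}

async function estimateBandwidth(samples = 5) {
  const estimates = [];
  for (let i = 0; i < samples; i++) {
    const t1 = await timeDownload('/test-128k');
    const t2 = await timeDownload('/test-256k');
    const kBps = 128 / (t2 - t1);       // latency cancels out of t2 - t1
    if (kBps > 0) estimates.push(kBps); // discard negative/absurd values
  }
  estimates.sort((x, y) => x - y);
  return estimates[Math.floor(estimates.length / 2)]; // median resists outliers
}
```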
I think you should not measure the speed or throughput.
A first guess could be the client's browser. There are many different browsers for computers, but they are generally not the same as the browsers used on mobile devices.
It is easy to check what browser your users are using.
Still, you should provide an option to switch between the lightweight and the full content, because your guess could be wrong.
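If you take this approach, a crude sketch of the check might look like this (the token list is illustrative, not exhaustive):

```javascript
// Sketch only: a rough User-Agent check; the token list is illustrative.
function looksLikeMobile(userAgent) {
  return /Mobile|Android|Opera Mini|UCBrowser|iPhone|iPad/i.test(userAgent);
}

// e.g. on the client: looksLikeMobile(navigator.userAgent)
// or server side, against the User-Agent request header.
```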
Yes, I know the response is DON'T.
The reason I'm reviving this ancient discussion is twofold:
Firstly, technology has changed; what wasn't easy 9 years ago might be now.
Secondly, I have a client with a website dating back over 20 years, virtually unchanged. He declined the offer of a (very inexpensive) rewrite because it works and it's very fast. It's only a few pages, and the content is still relevant (he did ask me to delete the FAX number!). His view was "if it ain't broke don't fix it". It was written in pure HTML for a 640px-wide screen in the days of dial-up modem connections. Some still use them. The fixed screen width means it's usable on mobile/tablet, especially in landscape mode. It doesn't look too bad on a big screen, as there's a tiled background.
I ran Google's PageSpeed checker and it only scored 99%, so I tweaked the .htaccess file and it now gets 100%. Most of us are spoilt with fast broadband, but some rural users get very disappointing speeds, and those guys can't be happy when they reach a multi-megabyte page. I thought maybe I could try an experiment on another site: if I could detect that a user was on a dial-up connection, I could see what happens if I served those users a simple, lightweight alternative.
Is this something you can discover on the client side? If you need to bypass proxies, you can always discover the connection type on the client side and send it back. Another method would be to download a file on the client side via some scripted mechanism, record the bytes per second, and report that information to the server.
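If the client side is available, the (non-standard, not universally supported) Network Information API exposes a connection estimate that can be reported back. A sketch, assuming a hypothetical /connection-report endpoint:

```javascript
// Sketch only: navigator.connection is unavailable in some browsers,
// so feature-detect before using it.
const conn = navigator.connection || navigator.mozConnection || navigator.webkitConnection;
if (conn) {
  // effectiveType is one of 'slow-2g', '2g', '3g', '4g'
  navigator.sendBeacon('/connection-report', JSON.stringify({
    effectiveType: conn.effectiveType,
    downlinkMbps: conn.downlink,   // estimated downlink in Mbit/s
    rttMs: conn.rtt                // estimated round-trip time in ms
  }));
}
```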
Compare the server-side arrival times of two requests from the same client.
How about a simple AJAX query that requests a URL for the content? Record the time of the first request server side, along with the client's IP, and store it somewhere (a file or database). Then compare it with the time of the follow-up request made by the client-side JavaScript. Pick an arbitrary "fast"/"slow" time limit and deliver the URL for the content of the appropriate weight.
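A rough Node.js sketch of that idea; the /ping endpoint, the in-memory map, and the 500 ms cut-off are all arbitrary assumptions:

```javascript
// Sketch only: compares the arrival times of two requests from the same IP.
const http = require('http');
const firstSeen = new Map();           // ip -> timestamp of first request
const SLOW_MS = 500;                   // arbitrary "slow" threshold

http.createServer((req, res) => {
  const ip = req.socket.remoteAddress;
  if (req.url === '/ping') {           // hit by the page, then by its AJAX
    if (!firstSeen.has(ip)) {
      firstSeen.set(ip, Date.now());
      res.end('first');
    } else {
      const gap = Date.now() - firstSeen.get(ip);
      firstSeen.delete(ip);
      res.end(gap > SLOW_MS ? '/content-light' : '/content-full');
    }
  } else {
    res.end();
  }
}).listen(8080);
```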