
What's the best way to determine at runtime if a browser is too slow to gracefully handle complex JavaScript/CSS?

I'm toying with the idea of progressively enabling/disabling JavaScript (and CSS) effects on a page - depending on how fast/slow the browser seems to be.

I'm specifically thinking about low-powered mobile devices and old desktop computers -- not just IE6 :-)

Are there any examples of this sort of thing being done?

What would be the best ways to measure this, accounting for things like temporary slowdowns on busy CPUs?

Notes:

  • I'm not interested in browser/OS detection.
  • At the moment, I'm not interested in bandwidth measurements - only browser/cpu performance.
  • Things that might be interesting to measure:
    • Base JavaScript
    • DOM manipulation
    • DOM/CSS rendering
  • I'd like to do this in a way that affects the page's render-speed as little as possible.

BTW: In order to not confuse/irritate users with inconsistent behavior - this would, of course, require on-screen notifications to allow users to opt in/out of this whole performance-tuning process.

[Update: there's a related question that I missed: Disable JavaScript function based on user's computer's performance. Thanks Andrioid!]


Not to be a killjoy here, but this is not a feat that is currently possible in any meaningful way in my opinion.

There are several reasons for this, the main ones being:

  1. Whatever measurement you do, if it is to have any meaning, will have to test the maximum potential of the browser/CPU, which you cannot do while maintaining any kind of reasonable user experience.

  2. Even if you could, it would be a meaningless snapshot, since you have no idea what kind of load the CPU is under from applications other than the browser while your test is running, or whether that situation will continue while the user is visiting your website.

  3. Even if you could do that, every browser has its own strengths and weaknesses, which means you'd have to test every DOM manipulation function to know how fast the browser would complete it. There is no "general" or "average" that makes sense here in my experience, and even if there were, the speed with which DOM manipulation commands execute depends on the context of what is currently in the DOM, which changes as you manipulate it.

The best you can do is to either

  1. Let your users decide what they want, and enable them to easily change that decision if they regret it

    or better yet

  2. Choose to give them something that you can be reasonably sure the greater part of your target audience will be able to enjoy.

Slightly off topic, but following this train of thought: if your users are not tech leaders in their social circles (as most users here are, but most people in the world are not), don't give them too much choice, i.e. any choice that is not absolutely necessary - they don't want it, and they don't understand the technical consequences of their decision until it is too late.


A different approach, one that does not need an explicit benchmark, would be to progressively enable features.

You could apply features in prioritized order, and after each one, drop the rest if a certain amount of time has passed.

By ensuring that the most expensive features come last, you would present the user with a roughly appropriate selection of features based on how speedy the browser is.
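
A minimal sketch of that idea in plain JavaScript - the feature list and the 200 ms budget are placeholders, not recommended values:

// Sketch only: enable features in priority order until a time budget is used up.
// The feature names and the 200 ms budget are illustrative placeholders.
var features = [
    function () { /* cheap: basic tooltips           */ },
    function () { /* moderate: animated menus        */ },
    function () { /* expensive: parallax backgrounds */ }
];

var budgetMs = 200;
var start = Date.now();

for (var i = 0; i < features.length; i++) {
    features[i]();                          // apply the next feature
    if (Date.now() - start > budgetMs) {
        break;                              // drop the remaining, more expensive features
    }
}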


You could try timing some basic operations - have a look at Steve Souders' Episodes and Yahoo's Boomerang for good ways of timing things browser-side. However, it's going to be rather complicated to work out how the metrics relate to an acceptable level of performance / a rewarding user experience.
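
As a rough illustration (this is not how Episodes or Boomerang work internally - the operation being timed and the 50 ms threshold are arbitrary assumptions, and it presumes the browser supports performance.now()):

// Sketch: time a basic DOM operation with the High Resolution Time API.
// The iteration count and the 50 ms threshold are arbitrary placeholders.
function timeDomAppend(count) {
    var container = document.createElement('div');
    var start = performance.now();
    for (var i = 0; i < count; i++) {
        var el = document.createElement('span');
        el.textContent = 'x';
        container.appendChild(el);
    }
    return performance.now() - start;       // elapsed milliseconds
}

if (timeDomAppend(1000) > 50) {
    // treat the browser as slow and dial the effects back
}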

If you're going to provide a UI to let users opt in / opt out, why not just let the user choose the level of eye candy in the app vs the rendering speed?


Take a look at some of Google's (copyrighted!) benchmarks for V8:

  • http://v8.googlecode.com/svn/data/benchmarks/v4/regexp.js

  • http://v8.googlecode.com/svn/data/benchmarks/v4/splay.js

I chose a couple of the simpler ones to give an idea of similar benchmarks you could create yourself to test feature sets. As long as you keep the run-time of your tests (from the time logged at the start to the time logged at completion) under 100 ms on the slowest systems (the Google tests take vastly longer than that), you should get the information you need without hurting the user experience. The Google benchmarks care about fine-grained differences between faster systems; you don't. All you need to know is which systems take longer than XX ms to complete.

Things you could test are regular expression operations (similar to the above), string concatenation, page scrolling, anything that causes a browser repaint or reflow, etc.
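
For example, a quick micro-test along those lines might look like this (the workload, iteration count, and 100 ms cutoff are illustrative, not tuned values):

// Sketch: a tiny regexp/string-concatenation micro-benchmark with a hard cutoff.
// The workload and the 100 ms threshold are illustrative placeholders.
function quickBenchmark() {
    var start = Date.now();
    var s = '';
    for (var i = 0; i < 5000; i++) {
        s += 'lorem ipsum ';
        /ipsum\s+\w+/.test(s.slice(-40));   // small regexp work in the spirit of the V8 tests
    }
    return Date.now() - start;
}

var tooSlow = quickBenchmark() > 100;       // all we need: did it exceed the budget?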


You could run all the benchmarks you want using Web Workers and then, according to the results, store your verdict about the machine's performance in a cookie. This only works in browsers that support HTML5 Web Workers, of course.
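
A rough sketch of that approach - the worker file name, the score threshold, and the cookie format are all made up for illustration:

// Main page (sketch): run the benchmark off the main thread, cache the verdict in a cookie.
if (window.Worker && document.cookie.indexOf('perfClass=') === -1) {
    var worker = new Worker('benchmark-worker.js');     // hypothetical worker script, see below
    worker.onmessage = function (e) {
        var perfClass = e.data.score > 1000000 ? 'fast' : 'slow';   // arbitrary cutoff
        document.cookie = 'perfClass=' + perfClass + '; max-age=604800; path=/';
        worker.terminate();
    };
    worker.postMessage('start');
}

// benchmark-worker.js (sketch): count loop iterations completed in a fixed 200 ms slice.
self.onmessage = function () {
    var end = Date.now() + 200, score = 0;
    while (Date.now() < end) { score++; }
    self.postMessage({ score: score });
};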


Some Ideas:

  • Putting a time limit on the tests seems like an obvious choice.
  • Storing test results in a cookie also seems obvious.
  • Poor performance on a test could pause further scripts
    • and trigger display of a non-blocking prompt UI (like the save-password prompts common in modern web browsers)
    • that asks the user if they want to opt into further scripting effects - and stores the answer in a cookie.
    • While the user hasn't answered the prompt, the tests could be repeated periodically, auto-accepting the prompt if consecutive tests finish faster than the first one.
  • On a side note - slow network speeds could also probably be tested
    • by timing the download of external resources (like the page's own CSS or JavaScript files) - see the sketch after this list
    • and comparing that result with the JavaScript benchmark results.
    • This may be useful on sites relying on lots of XHR effects and/or heavy use of <img/>s.
  • It seems that DOM rendering/manipulation benchmarks are difficult to perform before the page has started to render - and are thus likely to cause quite noticeable delays for all users.
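
On the network-timing idea, the Resource Timing API (where supported) can report how long one of the page's own files took to download. A sketch, assuming a stylesheet named 'site.css' and an arbitrary 500 ms threshold:

// Sketch: check how long the page's own CSS took to download via the Resource Timing API.
// The file name ('site.css') and the 500 ms threshold are placeholders.
var entries = performance.getEntriesByType('resource');
for (var i = 0; i < entries.length; i++) {
    if (entries[i].name.indexOf('site.css') !== -1) {
        var slowNetwork = entries[i].duration > 500;    // duration is in milliseconds
        // combine slowNetwork with the JavaScript benchmark result before deciding anything
    }
}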


I came to a similar question and solved it this way; in fact, it helped me make some decisions.

After rendering the page, I do:

let now, finishTime, i = 0;
now = Date.now();           // returns the number of milliseconds since Jan 01 1970
finishTime = now + 200;     // we add 200 ms (1/5 of a second)
while (now < finishTime) {
    i++;
    now = Date.now();
}
console.log("I looped " + i + " times!!!");

After doing that, I tested it on several browsers with different OSes, and the i value gave me the following results:

Windows 10 - 8 GB RAM:

  • 1,500,000 approx. on Chrome
  • 301,327 approx. on Internet Explorer
  • 141,280 on Firefox in a virtual machine running Lubuntu with 2 GB given

macOS - 8 GB RAM:

  • 3,000,000 approx. on Safari
  • 1,500,000 approx. on Firefox
  • 70,000 on Firefox 41 in a virtual machine running Windows XP with 2 GB given

Windows 10 - 4 GB RAM (an old computer I have):

  • 500,000 approx. on Google Chrome

I load a lot of divs in the form of a list; they are loaded dynamically according to the user's input, and this helped me limit the number of elements I create according to the performance score each browser had given. But the JS is not everything: even though the Lubuntu OS running on a virtual machine gave poor results, it loaded 20,000 div elements in less than 2 seconds and you could scroll through the list with no problem, while the same thing took more than 12 seconds on IE and the performance was awful!
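
As a hedged sketch of how the loop-count score could be turned into a limit on list size (the cutoffs and item counts below are illustrative placeholders, not values the author measured):

// Sketch: choose how many list items to render based on the loop-count score above.
// The cutoffs and item counts are illustrative placeholders, not measured values.
function maxItemsForScore(score) {
    if (score > 1000000) { return 20000; }  // fast machine: render the full list
    if (score > 300000)  { return 5000; }   // mid-range: trim the list
    return 1000;                            // slow machine: keep the DOM small
}

var limit = maxItemsForScore(i);            // i is the counter from the loop above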

So that could be a good way to do it. When it comes to rendering, that's another story, but this could definitely help you make some decisions.

Good luck, everyone!
