
Why doesn't Firefox redownload images already on a page?

I just read this article: https://developer.mozilla.org/en/HTTP_Caching_FAQ

There's a Firefox behavior (and that of some other browsers too, I guess) I'd like to understand:

If I take any web page and try to insert the same image multiple times via JavaScript, the image is only downloaded ONCE, even if I specify all the headers needed to say "do not ever use the cache" (see the article).

I know there are workarounds (like adding query strings to the end of URLs, etc.), but why does Firefox act like that? If I say that an image must not be cached, why is the image still taken from the cache when I try to re-insert it?

Plus, which cache is used for this? (I guess it's the memory cache.)

Is this behavior the same for dynamic inclusion, for example? THE ANSWER IS NO :) I just tested it, and the same headers on a JS script will make Firefox redownload it each time you append the script to the DOM.
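A minimal sketch of that dynamic-inclusion test (the `/test.js` URL and the function name are illustrative, not from the original post):

```javascript
// Sketch of the dynamic-script test described above. Each call creates a
// brand-new <script> element; with no-cache headers on the response,
// Firefox issues a fresh HTTP request on every append.
function appendScript(doc, src) {
  const s = doc.createElement('script');
  s.src = src;
  doc.body.appendChild(s);
  return s;
}

// In a browser:
//   appendScript(document, '/test.js');
//   appendScript(document, '/test.js'); // redownloaded, unlike an <img>
```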

PS: I know you're wondering WHY I need to do this (appending the same image multiple times and forcing a redownload), but this is the way our app works.

Thank you.


The short answer is: Firefox stores images for the current page load in the memory cache, even if you specify that it must not cache them.

You can't change this behavior, which is odd, because it's not the same for JavaScript files, for example.

Could someone explain, or link to a document describing, how the Firefox cache works?


What you are dealing with here is only partly a matter of caching.

The whole idea of embedding images by referencing them by URL is that you can reference the same URL multiple times and have the browser load it only once.

So basically if you write HTML (let's say index.html):

<img src="hello.jpg" />
<img src="hello.jpg" />

In this case, the browser makes only two HTTP requests: one for index.html and one for hello.jpg.

What you need to do here is to "fake" that the images are loaded from different URLs. For this there are multiple methods.

Potentially the easiest solution is to add an extra query string to the end of the URL. For instance, if you want to load an image twice, you could write:

<img src="hello.jpg?1" />
<img src="hello.jpg?2" />

This causes the browser to send one HTTP request for each image: one for hello.jpg?1 and a second for hello.jpg?2. Some web servers or browsers, however, may still optimise this and send only one request to retrieve hello.jpg.
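Generating the counter from script might look like this (a hedged sketch; the function and counter names are made up):

```javascript
// Cache-busting sketch: give each insertion of the same image a unique
// query string so the browser treats every copy as a distinct URL.
let bustCounter = 0;

function bustCache(url) {
  bustCounter += 1;
  // Append with '?' normally, or '&' if the URL already has a query string.
  const sep = url.includes('?') ? '&' : '?';
  return url + sep + bustCounter;
}

// In a browser you would then do, for each copy:
//   const img = document.createElement('img');
//   img.src = bustCache('hello.jpg'); // hello.jpg?1, then hello.jpg?2, ...
//   document.body.appendChild(img);
```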

If this doesn't work, you can also try passing your image through a script like PHP, as suggested in another answer.


I believe caching only refers to downloading the page another time: when you load your page a second time, images that have been cached are not loaded again. But as far as a single image (which is referred to by a single URL) is concerned, it's not affected by the "do not ever use cache" flag during a single page load.

Using PHP, I'd go with a script, e.g. image.php, that takes a hash (just a random code) as a parameter but delivers the same image over and over again, while the browser would interpret the image as a different one each time.

So image.php?id=asdf and image.php?id=qwer would load the same image, but they look like different images to the browser and will be loaded separately.
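On the client side, generating those random ids could be sketched like this (image.php is the hypothetical script from this answer; the helper name is made up):

```javascript
// Build a URL for the hypothetical image.php passthrough script with a
// random id, so each request looks like a different image to the browser.
function randomImageUrl(base) {
  // Timestamp plus a random suffix: unique enough for cache busting.
  const id = Date.now().toString(36) + Math.random().toString(36).slice(2, 8);
  return base + '?id=' + id;
}

// randomImageUrl('image.php') -> something like "image.php?id=lq3v8h2kx9d1"
```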


Jaakko's solution should work fine for your needs, provided your server side is happy to deal with the dummy query part.

This could be looked at as a bug, since you could argue that FF should really make a new HTTP request for each image. Appropriate client-side caching can still stop those requests from actually hitting the network where that is unnecessary. The definitive answer to whether this behavior is a bug should be in the HTML spec, but I don't know, as I haven't looked into it.

If I were you, I would change the approach a bit and use a workaround that you know will work in any browser worth caring about.

It sounds like you have a URI that outputs random/varying responses. A different approach might be to use a single URI but respond with 3xx redirects whose varying/random Location header points to the other available images. I haven't tested it, but this might avoid the problem and would also let you reference a single URI. It would also allow you to cache the images you're outputting, which you won't be able to do if you have one URI whose response is constantly changing.

GET /image/random
- 303 See Other
- Location: /image/xzkzkj3242hkjaha123

GET /image/random
- 303 See Other
- Location: /image/xlso847iuewrqb1231


It is not odd to me that images are cached within a single request and the cached version is used during page render. But I've noticed some other behaviour in my app which is very undesirable. I've got a completely AJAXed app, and in the following AJAX flow:

  1. I have some http://url/image.jpg displayed on the page.
  2. I upload a new image from the UI, replacing image.jpg. The URL remains the same, http://url/image.jpg, but the image changes. All of this is done in an AJAX request, and no plain GET is performed.
  3. Firefox shows the old image.jpg.

I think that for FF it is still the same single request, so there's no demand to refresh the image (or even ask the server whether it has expired). After a plain GET of the same page, I got the new image displayed.

This, IMHO, can be considered a bug.
