
Can HTTP headers be too big for browsers?

I am building an AJAX application that uses both the HTTP body and HTTP headers to send and receive data. Is there a point where data received in an HTTP header won't be read by the browser because it is too big? If so, what is the limit, and is the behaviour the same in all browsers?

I know that theoretically there is no limit to the size of HTTP headers, but in practice, past what point could I have problems on certain platforms, in certain browsers, or with certain software installed on the client machine? I am looking for a guideline for the safe use of HTTP headers. In other words, to what extent can HTTP headers be used to transmit additional data without potential problems arising?


Thanks for all the input on this question; it was much appreciated and interesting. Thomas's answer got the bounty, but Jon Hanna's answer brought up a very good point about proxies.


Short answers:

Same behaviour: No

Lowest limit found in popular browsers:

  • 10KB per header
  • 256KB for all headers in one response.

Test results from MacBook running Mac OS X 10.6.4:

Biggest response successfully loaded, all data in one header:

  • Opera 10: 150MB
  • Safari 5: 20MB
  • IE 6 via Wine: 10MB
  • Chrome 5: 250KB
  • Firefox 3.6: 10KB

Note: Those outrageously big headers in Opera, Safari, and IE took minutes to load.

Note on Chrome: The actual limit seems to be 256KB for the whole HTTP header block. The error message reads: "Error 325 (net::ERR_RESPONSE_HEADERS_TOO_BIG): Unknown error."

Note on Firefox: When sending the data through multiple headers, 100MB worked fine, split up over 10'000 headers.

My conclusion: If you want to support all popular browsers, 10KB per header seems to be the limit, and 256KB for all headers combined.
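If you go this route anyway, it may help to enforce those conservative limits before placing data in a header. The sketch below is my own illustration, not part of the original tests; the function name and the treatment of the limits as constants are assumptions:

```javascript
// Conservative limits derived from the test results above:
// Firefox capped a single header around 10KB, and Chrome capped
// the whole header block around 256KB.
const MAX_HEADER_BYTES = 10 * 1024;        // per individual header
const MAX_TOTAL_HEADER_BYTES = 256 * 1024; // all headers combined

// Returns true if `value` can be added as one more header without
// exceeding either limit, given `totalBytesSoFar` bytes of header
// data already queued for this response.
function fitsHeaderLimits(value, totalBytesSoFar = 0) {
  const size = Buffer.byteLength(value, "utf8");
  return size <= MAX_HEADER_BYTES &&
         totalBytesSoFar + size <= MAX_TOTAL_HEADER_BYTES;
}
```

Anything that fails this check would be better moved into the response body, where no comparable practical limit applies.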

My PHP code used to generate those responses:

<?php

ini_set('memory_limit', '1024M');
set_time_limit(90);

// Size of the payload placed in the header, in bytes.
$bytes  = 256000;
$header = str_repeat('1', $bytes);

header('MyData: ' . $header);

/* Firefox: data split over multiple headers
   (set $bytes = 10240 above to get 10KB per header)
for ($i = 1; $i <= 10000; $i++) {
    header('MyData' . $i . ': ' . $header);
}
*/

echo 'Length of header: ' . ($bytes / 1024) . ' kilobytes';

?>


In practice, while there are rules prohibiting proxies from dropping certain headers (indeed, quite clear rules on which headers can be modified, and even on how to inform a proxy whether it may modify a new header added by a later standard), this only applies to "transparent" proxies, and not all proxies are transparent. In particular, some strip headers they don't understand as a deliberate security practice.

Also, in practice, some proxies do misbehave (though things are much better than they were).

So, beyond the obvious core headers, the amount of header information you can depend on being passed from server to client is zero.

This is just one of the reasons why you should never depend on headers being handled well (e.g., be prepared for the client to repeat a request for something it should have cached, or for the server to send the whole entity when you request a range), barring the obvious case of authentication headers (under the fail-to-secure principle).


Two things.

First of all, why not just run a test that gives the browser progressively larger headers and wait until it hits a number that doesn't work? Just run it once in each browser. That's the most surefire way to figure this out. Even if it's not entirely comprehensive, you at least have some practical numbers to go on, and those numbers will likely cover the vast majority of your users.

Second, I agree with everyone saying that this is a bad idea. It should not be hard to find a different solution if you are really that concerned about hitting the limit. Even if you do test in every browser, there are still firewalls, proxies, etc., to worry about, and there is absolutely no way you will be able to test every combination (and I'm almost positive that no one has done this before you). You will not be able to get a hard limit for every case.

Though in theory this should all work out fine, there might later be that one edge case that bites you if you decide to do this.

TL;DR: This is a bad idea. Save yourself the trouble and find a real solution instead of a workaround.


Edit: Since you mention that the requests can come from several types of sources, why not just specify the source in a request header and keep the data entirely in the body? Have some kind of Source or ClientType field in the header that specifies where the request is coming from. If it's coming from a browser, include the HTML in the body; if it's coming from a PHP application, put some PHP-specific data in there; and so on. If the field is empty, don't add any extra data at all.
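That suggestion can be sketched as follows. This is a hypothetical illustration: the header name `X-Client-Type`, the function name, and the endpoint are all my own assumptions, not an established convention:

```javascript
// Build request options that tag the request with a small,
// fixed-size discriminator header, while the bulk payload stays
// in the body where no practical size limit applies.
function buildRequest(clientType, payload) {
  return {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-Client-Type": clientType, // e.g. "browser", "php-app"
    },
    body: JSON.stringify(payload),
  };
}

// Usage (hypothetical endpoint):
// fetch("/api/data", buildRequest("browser", { html: "<p>hi</p>" }));
```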


The HTTP/1.1 RFC does not specify any limit on the length of the headers or the body.

According to this page, modern browsers (Firefox, Safari, Opera), with the exception of IE, can handle very long URIs: https://web.archive.org/web/20191019132547/https://boutell.com/newfaq/misc/urllength.html. I know that is different from receiving headers, but it at least shows that they can create and send huge HTTP requests (possibly of unlimited length).

If there is any limit in the browsers, it would be something like the size of the available memory or the maximum value of a variable type.


Theoretically, there's no limit to the amount of data that can be sent to the browser. It's almost like asking whether there's a limit to the amount of content that can be in the body of a web page.

If possible, try to transmit the data through the body of the document. To be on the safe side, consider splitting the data up, so that there are multiple passes for loading.
