
HTML page missing 4096 characters on the client side

In my IIS webserver logfiles, every now and then I find an entry with status 404 (Not Found) that I cannot explain:

2011-07-06 17:05:48 W3SVC1804222802 10.248.3.8
GET /appl/localscripts/ifacobjcatFrame - 80 - 123.123.123.123
HTTP/1.0 Mozilla/4.0+(compatible;+MSIE+7.0;+Windows+NT+5.1;+Trident/4.0;+
.NET+CLR+2.0.50727;+.NET+CLR+3.0.04506.30;+.NET+CLR+3.0.04506.648;+.NET+
CLR+3.5.21022;+MS-RTC+EA+2;+MS-RTC+LM+8) www.example.com 404 0 2 5836 15

The weird part is the GET /appl/localscripts/ifacobjcatFrame, which should actually read

GET /appl/localscripts/iface.js

because my code has:

[snip 1100 chars]
<script type="text/javascript" src="../localscripts/iface.js"></script>
[snip almost 4096 chars]
<div id="frm_roomFrame">
[snip another 300 char]

The iface.js gets cut off, and objcatFrame, which appears a lot further down in my HTML, gets appended in its place.

I counted and it seems that exactly 4096 characters get dropped.

The strange thing is that this page works fine for 999 out of 1000 of my customers, with all kinds of browser versions. There is just one customer that has problems.

What could make Internet Explorer drop 4096 characters, seemingly at random, from an HTML page?

Note: the logfile line shows 5836 bytes towards the end, so my server claims to be sending the correct number of bytes for the page.


An extravagantly late answer to this question, but it's worth putting some notes in here so that people who encounter this issue and find this page (like me) have another lead.

It looks as though this issue was a fairly obscure IE bug involving the lookahead downloader, which attempted to optimise the downloading of JavaScript (and possibly CSS, etc.). There's a good description here, and documentation for the fix here.

It basically comes down to the parser attempting to determine the charset from the first few KB, while a link eligible for lookahead download spans a KB boundary. The parser restarts and provides the lookahead downloader with incorrect data for its link, leading to the malformed request you see. According to that second link, the exact boundaries the link can span to trigger the failure, and the amount of data apparently 'skipped' in the bad link, can vary, but they tend to be multiples of 4 KB.

Fortunately, because the parser restarts, it will still attempt to refetch the link correctly, so the user shouldn't see anything wrong. Of course, you may have security logic in place for users who make too many bad requests, so there could be some side effects.
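Since the restart is triggered by charset sniffing, one mitigation worth trying (my own note, not from the links above) is to declare the charset explicitly and as early as possible, so the parser never has to restart after guessing it from the content. A minimal sketch, assuming UTF-8; the header can be set in IIS or emitted by your application, and the title/path here are just placeholders:

Content-Type: text/html; charset=utf-8

<!DOCTYPE html>
<html>
<head>
    <!-- Redundant with the header, but keeps the charset explicit
         within the first bytes of the document itself -->
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
    <title>...</title>
    <script type="text/javascript" src="../localscripts/iface.js"></script>
</head>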

Hopefully this helps another reader out, because this bug had me pulling my hair out!
