How to reduce the number of open file handles on Linux while reading static resources from a jar file in Tomcat 6.0?
In our application we read static resources (i.e. JavaScript files) from a jar file located in WEB-INF/lib. It happens that the server quits working with a "too many open files" exception.
I found out (using lsof) that the jar file is opened several times, and each time I reload a page the count increases by the number of JavaScript files on that page.
I tried the following things without positive result (a sketch of these calls is shown after the list):
- URLConnection setDefaultUseCaches(false)
- URLConnection setUseCaches(false)
- context.xml cachingAllowed="false"
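In code, the first two attempts look roughly like this; the jar path and resource name are placeholders, not the actual application layout:

```java
import java.net.URL;
import java.net.URLConnection;

public class CacheTweaks {
    public static void main(String[] args) throws Exception {
        // Placeholder jar URL; the real resources live in WEB-INF/lib.
        URL url = new URL("jar:file:/opt/app/WEB-INF/lib/resources.jar!/js/example.js");
        URLConnection conn = url.openConnection();
        conn.setDefaultUseCaches(false); // default for future connections
        conn.setUseCaches(false);        // this connection only
        conn.getInputStream().close();   // opening still consumes a handle until closed
    }
}
```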
Is there something else I could try?
In Tomcat, each incoming request uses a TCP socket, and this socket consumes one file descriptor from the total available to the process. A file descriptor (FD) is a handle created by a process when a file is opened. Each process can use a limited number of FDs, and this limit is usually an OS-level setting.
If many JS files are loaded per page, each JS request will consume one FD while it is being processed. As the number of requests coming into the server increases, you can reach a situation where many sockets are open at once, you run out of FDs, and you get the "Too Many Open Files" error.
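One way to watch this from inside the JVM (a Linux-only sketch, assuming /proc is available) is to count the entries in /proc/self/fd:

```java
import java.io.File;

public class FdMonitor {
    public static void main(String[] args) {
        // On Linux, /proc/self/fd holds one entry per open file descriptor
        // of the current process (sockets and jar files included).
        String[] fds = new File("/proc/self/fd").list();
        System.out.println("Open FDs: " + (fds == null ? "unknown" : fds.length));
    }
}
```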
Check the value of cat /proc/sys/fs/file-max to see the system-wide limit on open files (the per-process limit, which is what Tomcat actually hits, is shown by ulimit -n). Ideally it should be at least 65535. See here on how to increase this limit:
http://tech-torch.blogspot.com/2009/07/linux-ubuntu-tomcat-too-many-open-files.html
Another suggestion is to reduce the number of JS requests by combining the JS files into one.
A little light on detail, but it sounds like you're loading the resources with a stream, so chances are your code creates the stream anonymously in a method call (a very common practice in examples I've seen). I know that causes file-locking issues on Windows, so I'm sure it keeps descriptors hanging about on Unixes too. Make sure you open the stream in a try block and call close() in the finally block when you're done. Static code analysis tools will usually catch this condition.
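A minimal sketch of that pattern, assuming the resource is read via getResourceAsStream (the path "/js/example.js" is a placeholder):

```java
import java.io.IOException;
import java.io.InputStream;

public class ResourceReader {
    public void read() throws IOException {
        // Keep a reference instead of opening the stream anonymously,
        // so it can be closed deterministically in the finally block.
        InputStream in = getClass().getResourceAsStream("/js/example.js"); // placeholder path
        if (in == null) {
            return; // resource not found
        }
        try {
            byte[] buf = new byte[8192];
            while (in.read(buf) != -1) {
                // ... process the bytes ...
            }
        } finally {
            in.close(); // releases the underlying file descriptor
        }
    }
}
```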
You don't mention, however, how many file handles we're talking about. Usually network connections are the culprit, and usually you want to increase the limit.
The input stream is closed in the finally block as you suggest. I also looked at URLConnection, but it seems there is no close() or disconnect() method.
It seems to me that the files are closed after a certain period of time. The open files are listed by lsof, and if I reload the page the number of open file handles goes up; after a couple of minutes it goes down again. Under high user traffic, the open file handles already exceeded the maximum of 2048 per process, so the freeing of open file handles comes too late.
I wrote a tiny test program that opens a URLConnection to a file. Merely calling getLastModified() already opens a file handle. When I afterwards close the connection's input stream, this effect disappears.
So I come to the conclusion that I have to close the URLConnection's input stream even if the stream is never read after the connection is opened.
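A sketch of a test program along those lines (the file URL is a placeholder; for a jar entry it would be a jar: URL as above):

```java
import java.io.InputStream;
import java.net.URL;
import java.net.URLConnection;

public class LastModifiedTest {
    public static void main(String[] args) throws Exception {
        // Placeholder URL; point it at any local file or jar entry.
        URL url = new URL("file:///tmp/example.js");
        URLConnection conn = url.openConnection();

        // getLastModified() alone already opens a file handle under the hood.
        System.out.println("lastModified = " + conn.getLastModified());

        // URLConnection has no close(); the handle is released by closing
        // the underlying input stream, even though we never read from it.
        InputStream in = conn.getInputStream();
        in.close();
    }
}
```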