Optimizing disk writes for Java Logger
I am using java.util.logging.Logger to log various events in my project, with a FileHandler to create the log. I see that events are written to the log file on disk at almost the same pace at which they happen. This seems good and bad at the same time: good because event updates are written quickly, but I am concerned about the I/O time. Sometimes there is a lot of data to write to the logs, and in those cases my program would run slower because of the logging, which is not desirable.
It would be of great help if somebody could suggest what I should do in this case. I do not care about the rate at which events are logged; they just need to be in the log file by the end of execution.
Thanks.
A performance loss of 5-10% is expected when running full debug logging. This seems to be acceptable for our customers.
If the code that generates some of the logged content is expensive, consider guarding it with a simple level check so it only runs when that level is enabled:

if (log.isLoggable(Level.FINEST)) {
    // code to generate the log entry
}
You can also create a java.util.logging.MemoryHandler and push out to a file at a regular interval.
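A minimal sketch of that approach, using only java.util.logging. The file name app.log, the buffer size of 1000 records, and the manual push() call (which in practice you might drive from a timer task) are all illustrative choices, not anything the library mandates:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.logging.FileHandler;
import java.util.logging.Level;
import java.util.logging.Logger;
import java.util.logging.MemoryHandler;
import java.util.logging.SimpleFormatter;

public class MemoryHandlerDemo {
    public static void main(String[] args) throws IOException {
        // The real handler that performs disk I/O.
        FileHandler fileHandler = new FileHandler("app.log");
        fileHandler.setFormatter(new SimpleFormatter());

        // Buffer up to 1000 records in memory; records are pushed to the
        // file automatically only when a SEVERE record arrives.
        MemoryHandler memoryHandler =
                new MemoryHandler(fileHandler, 1000, Level.SEVERE);

        Logger logger = Logger.getLogger("demo");
        logger.setUseParentHandlers(false);
        logger.addHandler(memoryHandler);

        for (int i = 0; i < 50; i++) {
            logger.info("event " + i);  // stays in the in-memory buffer
        }

        memoryHandler.push();   // e.g. called periodically from a timer task
        memoryHandler.flush();
        fileHandler.close();

        System.out.println("log file has content: "
                + (Files.size(Path.of("app.log")) > 0));
    }
}
```

Note that records beyond the buffer size silently overwrite the oldest ones, so the push interval (or push level) has to be chosen to match your logging volume.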
Jochen Bedersdorfer's answer is a good one, and just4log is a system that will do it automatically for you via post-processing, so you won't have to clutter your code with if statements around the log statements.
Pexus has recently released an open source performance logging package, PerfLog, that also includes an application logger based on the java.util.logging.* API. It includes an option for asynchronous logging using the CommonJ Work Manager that is available in all J2EE containers (1.4+). For more information see: http://www.pexus.com/perflog
Use a more modern logging library such as log4j or slf4j, which support asynchronous/buffered appenders.
In log4j, you can use AsyncAppender (which provides the buffering facility) and wire up a FileAppender to it:
The AsyncAppender will collect the events sent to it and then dispatch them to all the appenders that are attached to it. You can attach multiple appenders to an AsyncAppender.
The AsyncAppender uses a separate thread to serve the events in its buffer.
This way the events are written to the disk in a controlled manner, and your threads doing actual work are not tied up with disk I/O.
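As a sketch, that wiring can be expressed in a log4j 1.x XML configuration. The appender names, file name, and buffer size below are illustrative, not required values:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">
  <!-- The appender that actually touches the disk. -->
  <appender name="FILE" class="org.apache.log4j.FileAppender">
    <param name="File" value="app.log"/>
    <layout class="org.apache.log4j.PatternLayout">
      <param name="ConversionPattern" value="%d %p %c - %m%n"/>
    </layout>
  </appender>

  <!-- Buffers events and hands them to FILE on a background thread. -->
  <appender name="ASYNC" class="org.apache.log4j.AsyncAppender">
    <param name="BufferSize" value="512"/>
    <appender-ref ref="FILE"/>
  </appender>

  <root>
    <priority value="debug"/>
    <appender-ref ref="ASYNC"/>
  </root>
</log4j:configuration>
```

Application code then logs through the normal Logger API; only the configuration changes.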
Or as a simpler option, consider if you really need to have the full output of the logs when running this program. It's often overkill to run an application in production with logging at the DEBUG level.
I would suggest you try another logging solution, like log4j, which is widely used (often in combination with commons-logging). It offers a performant approach to logging.
If, however, you desire even more control, you can implement your own appender. Assuming you want a file appender, you can override the append routine of FileAppender.
E.g.,
import java.util.LinkedList;
import java.util.List;

import org.apache.log4j.FileAppender;
import org.apache.log4j.spi.LoggingEvent;

public class BatchingFileAppender extends FileAppender {

    public static final int BATCH_SIZE = 10;

    private final List<LoggingEvent> batch = new LinkedList<LoggingEvent>();

    @Override
    protected void append(LoggingEvent event) {
        batch.add(event);
        // push every BATCH_SIZE'th message to the file
        if (batch.size() >= BATCH_SIZE) {
            appendBatch();
        }
    }

    @Override
    protected void reset() {
        appendBatch();       // don't lose buffered events
        super.reset();
    }

    @Override
    protected void closeWriter() {
        appendBatch();       // flush remaining events before closing
        super.closeWriter();
    }

    private void appendBatch() {
        for (LoggingEvent event : batch) {
            super.append(event);
        }
        batch.clear();
    }
}
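A hypothetical log4j.properties entry wiring up the custom appender above; the package name com.example, the appender name BATCH, and the file name are placeholders you would adapt:

```
log4j.rootLogger=DEBUG, BATCH
log4j.appender.BATCH=com.example.BatchingFileAppender
log4j.appender.BATCH.File=app.log
log4j.appender.BATCH.layout=org.apache.log4j.PatternLayout
log4j.appender.BATCH.layout.ConversionPattern=%d %p %c - %m%n
```

One caveat with batching like this: events buffered in the list are lost if the JVM crashes before the next flush, which is the usual trade-off of deferred disk writes.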
You should check out Logback. Same authors as log4j if I'm not mistaken.
Based on our previous work on log4j, logback internals have been re-written to perform about ten times faster on certain critical execution paths. Not only are logback components faster, they have a smaller memory footprint as well.
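For the asynchronous-writing use case in the question, Logback also ships an AsyncAppender. A minimal logback.xml sketch (appender names, queue size, and file name are illustrative):

```xml
<configuration>
  <appender name="FILE" class="ch.qos.logback.core.FileAppender">
    <file>app.log</file>
    <encoder>
      <pattern>%d %level %logger - %msg%n</pattern>
    </encoder>
  </appender>

  <!-- Queues events and writes them to FILE from a worker thread. -->
  <appender name="ASYNC" class="ch.qos.logback.classic.AsyncAppender">
    <queueSize>512</queueSize>
    <appender-ref ref="FILE"/>
  </appender>

  <root level="debug">
    <appender-ref ref="ASYNC"/>
  </root>
</configuration>
```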