FileChannel.write on Linux produces lots of garbage, but not on Mac
I am trying to limit the amount of garbage produced by my log library, so I wrote a test to show me how much memory FileChannel.write allocates. The code below allocates ZERO memory on my Mac, but creates tons of garbage on my Linux box (Ubuntu 10.04.1 LTS), triggering the GC. FileChannels are supposed to be fast and lightweight. Is there a JRE version where this was fixed on Linux?
File file = new File("fileChannelTest.log");
FileOutputStream fos = new FileOutputStream(file);
FileChannel fileChannel = fos.getChannel();
ByteBuffer bb = ByteBuffer.wrap("This is a log line to test!\n".getBytes());
bb.mark();
long freeMemory = Runtime.getRuntime().freeMemory();
for (int i = 0; i < 1000000; i++) {
    bb.reset();
    fileChannel.write(bb);
}
System.out.println("Memory allocated: " + (freeMemory - Runtime.getRuntime().freeMemory()));
The details of my JRE are below:
java version "1.6.0_19"
Java(TM) SE Runtime Environment (build 1.6.0_19-b04)
Java HotSpot(TM) 64-Bit Server VM (build 16.2-b04, mixed mode)
Updated to:
java version "1.6.0_27"
Java(TM) SE Runtime Environment (build 1.6.0_27-b07)
Java HotSpot(TM) 64-Bit Server VM (build 20.2-b06, mixed mode)
And it worked fine. :-|
Well, so now we know that earlier versions of FileChannelImpl on Linux have a memory allocation problem.
I'm on Ubuntu 10.04 and I can confirm your observation. My JDK is:
java version "1.6.0_20"
OpenJDK Runtime Environment (IcedTea6 1.9.9) (6b20-1.9.9-0ubuntu1~10.04.2)
OpenJDK 64-Bit Server VM (build 19.0-b09, mixed mode)
The solution is to use a DirectByteBuffer, not a HeapByteBuffer, which is backed by an array.
This is a very old "feature" dating back to JDK 1.4, if I remember correctly: if you don't give a DirectByteBuffer to a Channel, a temporary DirectByteBuffer is allocated and the contents are copied into it before writing. You basically see these temporary buffers lingering in the JVM.
The following code works for me:
File file = new File("fileChannelTest.log");
FileOutputStream fos = new FileOutputStream(file);
FileChannel fileChannel = fos.getChannel();
ByteBuffer bb1 = ByteBuffer.wrap("This is a log line to test!\n".getBytes());
ByteBuffer bb2 = ByteBuffer.allocateDirect(bb1.remaining());
bb2.put(bb1).flip();
bb2.mark();
long freeMemory = Runtime.getRuntime().freeMemory();
for (int i = 0; i < 1000000; i++) {
    bb2.reset();
    fileChannel.write(bb2);
}
System.out.println("Memory allocated: " + (freeMemory - Runtime.getRuntime().freeMemory()));
Just for reference: the copy of the HeapByteBuffer is taken in sun.nio.ch.IOUtil.write(FileDescriptor, ByteBuffer, long, NativeDispatcher, Object), which uses sun.nio.ch.Util.getTemporaryDirectBuffer(int). That in turn implements a little per-thread pool of DirectByteBuffers backed by SoftReferences. So there is no real memory leak, only wastage. Sigh.
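For the curious, the pooling idea is roughly the following (an illustrative sketch, not the actual sun.nio.ch.Util source; the class and field names here are made up):

import java.lang.ref.SoftReference;
import java.nio.ByteBuffer;

// Sketch of a per-thread temporary direct-buffer cache, roughly what
// sun.nio.ch.Util.getTemporaryDirectBuffer does internally. The real JDK code differs.
final class TempDirectBuffers {
    // Each thread softly holds its last temporary buffer, so the GC can
    // reclaim it under memory pressure: wastage, not a leak.
    private static final ThreadLocal<SoftReference<ByteBuffer>> CACHE =
            new ThreadLocal<SoftReference<ByteBuffer>>();

    static ByteBuffer get(int size) {
        SoftReference<ByteBuffer> ref = CACHE.get();
        ByteBuffer buf = (ref != null) ? ref.get() : null;
        if (buf == null || buf.capacity() < size) {
            buf = ByteBuffer.allocateDirect(size); // fresh temporary buffer
            CACHE.set(new SoftReference<ByteBuffer>(buf));
        }
        buf.clear().limit(size);
        return buf;
    }
}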