
Growing ByteBuffer

Has anyone ever seen an implementation of java.nio.ByteBuffer that will grow dynamically if a putX() call overruns the capacity?

The reason I want to do it this way is twofold:

  1. I don't know how much space I need ahead of time.
  2. I'd rather not do a new ByteBuffer.allocate() then a bulk put() every time I run out of space.


In order for asynchronous I/O to work, you must have contiguous memory. In C you can attempt to realloc an array, but in Java you must allocate new memory. You could write to a ByteArrayOutputStream, and then convert it to a ByteBuffer at the time you are ready to send it. The downside is that you are copying memory, and one of the keys to efficient I/O is reducing the number of times memory is copied.
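
As a minimal sketch of that approach (the class and names here are illustrative, not from the original answer):

    import java.io.ByteArrayOutputStream;
    import java.nio.ByteBuffer;

    class GrowThenWrap {
        // Accumulate into a growable stream, then copy once at send time.
        static ByteBuffer build() {
            ByteArrayOutputStream growable = new ByteArrayOutputStream();
            growable.write(new byte[]{1, 2, 3}, 0, 3);   // grows as needed
            // ... keep writing until the message is complete ...

            // The one unavoidable copy: toByteArray() duplicates the internal array.
            return ByteBuffer.wrap(growable.toByteArray());
        }
    }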


A ByteBuffer cannot really work this way, as its design concept is to be just a view of a specific array, which you may also have a direct reference to. It could not try to swap that array for a larger array without weirdness happening.

What you want to use is a DataOutput. The most convenient way is to use the (pre-release) Guava library:

ByteArrayDataOutput out = ByteStreams.newDataOutput();
out.write(someBytes);
out.writeInt(someInt);
// ...
return out.toByteArray();

But you could also create a DataOutputStream from a ByteArrayOutputStream manually, and just deal with the spurious IOExceptions by chaining them into AssertionErrors.
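
Without Guava, the hand-rolled version might look like this sketch (the class name is made up for illustration):

    import java.io.ByteArrayOutputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;

    class GrowingDataOutput {
        static byte[] build(byte[] someBytes, int someInt) {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(bytes);
            try {
                out.write(someBytes);
                out.writeInt(someInt);
                // ...
                out.flush();
            } catch (IOException e) {
                // A ByteArrayOutputStream never actually throws, so treat this as a bug.
                throw new AssertionError(e);
            }
            return bytes.toByteArray();
        }
    }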


Another option is to use direct memory with a large buffer. This consumes virtual memory but only uses as much physical memory as you actually touch (committed per page, which is typically 4 KB).

So if you allocate a buffer of 1 MB, it consumes 1 MB of virtual memory, but the OS only gives physical pages to the application for the parts it actually uses.

The effect is that you see your application using a lot of virtual memory but a relatively small amount of resident memory.
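
For illustration, the allocation itself is a single call; the 64 MB figure below is arbitrary:

    import java.nio.ByteBuffer;

    class DirectOverAllocate {
        // Reserve a generous direct buffer up front; per the note above,
        // physical pages are only committed for the regions actually touched.
        static final ByteBuffer BUFFER = ByteBuffer.allocateDirect(64 * 1024 * 1024);

        static void example() {
            BUFFER.putInt(42);   // only the touched pages become resident
        }
    }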


Have a look at Mina IOBuffer (https://mina.apache.org/mina-project/userguide/ch8-iobuffer/ch8-iobuffer.html), which is a drop-in replacement (it wraps the ByteBuffer).
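
If I remember the MINA API correctly, the auto-expanding usage looks roughly like this (check the linked user guide for the exact calls):

    import org.apache.mina.core.buffer.IoBuffer;

    class MinaAutoExpand {
        static IoBuffer build() {
            // Auto-expanding buffer: capacity grows when a put would overrun it.
            IoBuffer buffer = IoBuffer.allocate(64).setAutoExpand(true);
            buffer.putInt(42);
            buffer.put(new byte[]{1, 2, 3});
            return buffer;
        }
    }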

However, I suggest you allocate more than you need and not worry about it too much. If you allocate a buffer (especially a direct buffer), the OS gives it virtual memory, but it only uses physical memory when it's actually used. Virtual memory should be very cheap.


It may also be worth having a look at Netty's DynamicChannelBuffer. Things that I find handy (sketched below) are:

  • slice(int index, int length)
  • unsigned operations
  • separated writer and reader indexes
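
A rough sketch of those features in Netty 3.x terms, where DynamicChannelBuffer comes from the ChannelBuffers factory (method names quoted from memory):

    import org.jboss.netty.buffer.ChannelBuffer;
    import org.jboss.netty.buffer.ChannelBuffers;

    class DynamicBufferSketch {
        static void example() {
            // Grows automatically when a write overruns the current capacity.
            ChannelBuffer buf = ChannelBuffers.dynamicBuffer(16);
            buf.writeBytes(new byte[64]);      // no manual reallocation needed
            buf.writeInt(42);

            // Separate reader and writer indexes: reads don't disturb writes.
            int unread = buf.writerIndex() - buf.readerIndex();

            // Unsigned accessors and slicing by absolute index.
            short firstUnsigned = buf.getUnsignedByte(0);
            ChannelBuffer view = buf.slice(0, unread);
        }
    }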


Indeed, auto-extending buffers are so much more intuitive to work with. If you can afford the performance luxury of reallocation, why wouldn't you!?

Netty's ByteBuf gives you exactly this. It's like they've taken java.nio's ByteBuffer and scraped away the edges, making it much easier to use.

Furthermore, it's on Maven as an independent netty-buffer package, so you don't need to include the full Netty suite to use it.
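
A quick taste of that API as I understand it (io.netty.buffer; class and method names from memory):

    import io.netty.buffer.ByteBuf;
    import io.netty.buffer.Unpooled;

    class ByteBufGrows {
        static byte[] example() {
            // Starts small and expands automatically as writes overrun it.
            ByteBuf buf = Unpooled.buffer(16);
            buf.writeBytes(new byte[100]);   // no allocate-and-copy dance
            buf.writeInt(42);

            byte[] out = new byte[buf.readableBytes()];
            buf.readBytes(out);
            return out;
        }
    }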


I'd suggest using an input stream to receive data from a file (with a separate thread if you need non-blocking), then reading the bytes into a ByteArrayOutputStream, which gives you the ability to get the result as a byte array. Here's a simple example without adding too many workarounds.

    try (InputStream inputStream = Files.newInputStream(
            Paths.get("filepath"), StandardOpenOption.READ)) {

        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        int byteRead;

        // Check for end-of-stream before writing, so the -1 sentinel
        // never ends up in the output.
        while ((byteRead = inputStream.read()) != -1) {
            baos.write(byteRead);
        }

        ByteBuffer byteBuffer = ByteBuffer.allocate(baos.size());
        byteBuffer.put(baos.toByteArray());

        // . . . use the buffer however you want

    } catch (InvalidPathException pathException) {
        System.out.println("Path exception: " + pathException);
    } catch (IOException exception) {
        System.out.println("I/O exception: " + exception);
    }


Another solution for this would be to allocate more than enough memory, fill the ByteBuffer and then only return the occupied byte array:

Initialize a big ByteBuffer:

ByteBuffer byteBuffer = ByteBuffer.allocate(1000);

After you're done putting things into it:

private static byte[] getOccupiedArray(ByteBuffer byteBuffer)
{
    int position = byteBuffer.position();
    return Arrays.copyOfRange(byteBuffer.array(), 0, position);
}

However, using an org.apache.commons.io.output.ByteArrayOutputStream from the start would probably be the best solution.


Netty ByteBuf is pretty good on that.


A Vector allows for continuous growth

Vector<Byte> bFOO = new Vector<Byte>(); bFOO.add((byte) 0x00);
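
To actually hand that to a ByteBuffer you still need to unbox into a byte[] at the end; a minimal sketch of that last step (the helper name is made up):

    import java.nio.ByteBuffer;
    import java.util.Vector;

    class VectorToBuffer {
        // Note each element is a boxed Byte, so this trades memory for convenience.
        static ByteBuffer toBuffer(Vector<Byte> bytes) {
            byte[] raw = new byte[bytes.size()];
            for (int i = 0; i < raw.length; i++) {
                raw[i] = bytes.get(i);
            }
            return ByteBuffer.wrap(raw);
        }
    }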


To serialize something you need an object to start with. What you can do is put your objects into a collection, then loop over an iterator and write them into a byte array. Then call ByteBuffer.allocate(byteArray.length). That is what I did and it worked for me.
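
The description is terse; one way to read it, as a hedged sketch using standard Java serialization (names are illustrative):

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.ObjectOutputStream;
    import java.io.Serializable;
    import java.nio.ByteBuffer;
    import java.util.List;

    class SerializeToBuffer {
        // Serialize the collection into a growable byte stream, then size
        // the ByteBuffer from the resulting array, as the answer describes.
        static ByteBuffer serialize(List<? extends Serializable> objects) throws IOException {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                for (Serializable o : objects) {
                    out.writeObject(o);
                }
            }
            byte[] raw = bytes.toByteArray();
            ByteBuffer buffer = ByteBuffer.allocate(raw.length);
            buffer.put(raw);
            buffer.flip();  // prepare the buffer for reading/sending
            return buffer;
        }
    }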

