
Java: Issue with available() method of BufferedInputStream

I'm dealing with the following code that is used to split a large file into a set of smaller files:

FileInputStream input = new FileInputStream(this.fileToSplit);
BufferedInputStream iBuff = new BufferedInputStream(input);
int i = 0;

FileOutputStream output = new FileOutputStream(fileArr[i]);
BufferedOutputStream oBuff = new BufferedOutputStream(output);

int buffSize = 8192;
byte[] buffer = new byte[buffSize];
while (true) {
    if (iBuff.available() < buffSize) {
        byte[] newBuff = new byte[iBuff.available()];
        iBuff.read(newBuff);
        oBuff.write(newBuff);
        oBuff.flush();
        oBuff.close();

        break;
    }
    int r = iBuff.read(buffer);

    if (fileArr[i].length() >= this.partSize) {
        oBuff.flush();
        oBuff.close();
        ++i;
        output = new FileOutputStream(fileArr[i]);
        oBuff = new BufferedOutputStream(output);
    }
    oBuff.write(buffer);
}

} catch (Exception e) {
    e.printStackTrace();
}

Here is the weird behavior I'm seeing: when I run this code on a 3 GB file, the initial iBuff.available() call returns a value of approximately 2,100,000,000 and the code works fine. When I run it on a 12 GB file, the initial iBuff.available() call returns only about 200,000,000 (which is smaller than the split file size of 500,000,000 and causes the processing to go awry).

I'm thinking this discrepancy in behavior has something to do with the fact that this is on 32-bit Windows. I'm going to run a couple more tests on a 4.5 GB file and a 3.5 GB file. If the 3.5 GB file works and the 4.5 GB one doesn't, that will further confirm the theory that it's a 32-bit vs. 64-bit issue, since 4 GB would then be the threshold.


Well if you read the javadoc it quite clearly states:

Returns the number of bytes that can be read from this input stream without blocking (emphasis added by me)

So it's quite clear that what you want is not what this method offers. Depending on the underlying InputStream, you may run into problems much earlier: for a stream over the network with a server that doesn't report the file size, the stream would have to read and buffer the complete file just to return the "correct" available() count, which would take a lot of time. What if you only want to read a header?

So the correct way to handle this is to change your parsing method to handle the file in pieces. Personally, I don't see much reason to use available() here at all: just call read() and stop as soon as read() returns -1. This can be made more elaborate if you want to ensure that every file really contains blockSize bytes; just add an internal loop if that scenario is important.

int blockSize = XXX;
byte[] buffer = new byte[blockSize];
int i = 0;
int read = in.read(buffer);
while (read != -1) {
    out[i++].write(buffer, 0, read); // out: one OutputStream per target file
    read = in.read(buffer);
}
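To flesh out the "internal loop" idea: keep reading until the current block is completely full (or EOF) before moving to the next file, so every part except possibly the last has exactly the target size. This is only a sketch; the names split and parts are mine, and it assumes one buffer per part for simplicity:

```java
import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class SplitLoop {
    /** Splits in into files of exactly partSize bytes (the last one may be shorter). */
    static void split(InputStream in, File[] parts, int partSize) throws IOException {
        byte[] buffer = new byte[partSize];
        for (int i = 0; ; i++) {
            // inner loop: a single read() may return fewer bytes than requested,
            // so keep reading until the buffer is full or we hit EOF
            int filled = 0;
            int r;
            while (filled < partSize
                    && (r = in.read(buffer, filled, partSize - filled)) != -1) {
                filled += r;
            }
            if (filled == 0) {
                break; // nothing left to write
            }
            OutputStream out = new BufferedOutputStream(new FileOutputStream(parts[i]));
            try {
                out.write(buffer, 0, filled);
            } finally {
                out.close();
            }
            if (filled < partSize) {
                break; // EOF reached in the middle of a part
            }
        }
    }
}
```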


There are few correct uses of available(), and this isn't one of them. You don't need all that junk. Memorize this:

int count;
byte[] buffer = new byte[8192]; // or more
while ((count = in.read(buffer)) > 0)
  out.write(buffer, 0, count);

That's the canonical way to copy a stream in Java.
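For what it's worth, since Java 9 the JDK ships that exact loop as InputStream.transferTo, so you don't even have to write it yourself. A quick in-memory check:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;

public class CopyDemo {
    public static void main(String[] args) throws IOException {
        ByteArrayInputStream in = new ByteArrayInputStream("hello".getBytes("UTF-8"));
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        in.transferTo(out); // the same canonical copy loop, provided by the JDK since Java 9
        System.out.println(out.size()); // 5
    }
}
```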


You should not use the InputStream.available() function at all. It is only needed in very special circumstances.

You should also not create byte arrays larger than 1 MB; it's a waste of memory. The commonly accepted way is to read a small block (4 kB up to 1 MB) from the source file and then write only as many bytes as you actually read to the destination file. Repeat until you have reached the end of the source file.


available() isn't a measure of how much is still to be read; it's a measure of how much the stream guarantees you can read before it might hit EOF or block waiting for input.
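You can see this semantics even on an in-memory stream, where available() simply reports what remains in the buffer, not the size of any underlying source:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;

public class AvailableDemo {
    public static void main(String[] args) throws IOException {
        ByteArrayInputStream in = new ByteArrayInputStream(new byte[100]);
        System.out.println(in.available()); // 100: everything is already in memory
        in.read(new byte[30]);
        System.out.println(in.available()); // 70: only what is left to read without blocking
    }
}
```

For a FileInputStream or a socket stream, the value depends on the OS and JDK and may be far smaller than the total remaining data.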

Also, put the close() calls in finally blocks:

BufferedInputStream iBuff = new BufferedInputStream(input);
int i = 0;

try {
    int buffSize = 8192;
    int offset = 0;
    byte[] buffer = new byte[buffSize];
    while (true) {
        int len = iBuff.read(buffer, offset, buffSize - offset);
        if (len == -1) { // EOF: write out the last partial chunk, if any
            if (offset > 0) {
                BufferedOutputStream oBuff =
                        new BufferedOutputStream(new FileOutputStream(fileArr[i]));
                try {
                    oBuff.write(buffer, 0, offset);
                } finally {
                    oBuff.close();
                }
            }
            break;
        }
        offset += len;
        if (offset == buffSize) { // buffer full: write it out to the next file
            BufferedOutputStream oBuff =
                    new BufferedOutputStream(new FileOutputStream(fileArr[i]));
            try {
                oBuff.write(buffer, 0, offset);
            } finally {
                oBuff.close();
            }
            ++i;
            offset = 0;
        }
    }
} finally {
    iBuff.close();
}


Here is some code that splits a file. If performance is critical to you, you can experiment with the buffer size.

package so6164853;

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.Formatter;

public class FileSplitter {

  private static String printf(String fmt, Object... args) {
    Formatter formatter = new Formatter();
    formatter.format(fmt, args);
    return formatter.out().toString();
  }

  /**
   * @param outputPattern see {@link Formatter}
   */
  public static void splitFile(String inputFilename, long fragmentSize, String outputPattern) throws IOException {
    InputStream input = new FileInputStream(inputFilename);
    try {
      byte[] buffer = new byte[65536];
      int outputFileNo = 0;
      OutputStream output = null;
      long writtenToOutput = 0;

      try {
        while (true) {
          int bytesToRead = buffer.length;
          if (bytesToRead > fragmentSize - writtenToOutput) {
            bytesToRead = (int) (fragmentSize - writtenToOutput);
          }

          int bytesRead = input.read(buffer, 0, bytesToRead);
          if (bytesRead != -1) {
            if (output == null) {
              String outputName = printf(outputPattern, outputFileNo);
              outputFileNo++;
              output = new FileOutputStream(outputName);
              writtenToOutput = 0;
            }
            output.write(buffer, 0, bytesRead);
            writtenToOutput += bytesRead;
          }
          if (output != null && (bytesRead == -1 || writtenToOutput == fragmentSize)) {
            output.close();
            output = null;
          }
          if (bytesRead == -1) {
            break;
          }
        }
      } finally {
        if (output != null) {
          output.close();
        }
      }
    } finally {
      input.close();
    }
  }

  public static void main(String[] args) throws IOException {
    splitFile("d:/backup.zip", 1440 << 10, "d:/backup.zip.part%04d");
  }
}

Some remarks:

  • Only those bytes that have actually been read from the input file are written to one of the output files.
  • I left out the BufferedInputStream and BufferedOutputStream since their buffer size is only 8192 bytes, which is less than the buffer I use in the code.
  • As soon as I open a file, I make sure that it will be closed at the end, no matter what happens. (The finally blocks.)
  • The code contains only one call to input.read and only one call to output.write. This makes it easier to check for correctness.
  • The code for splitting a file does not catch the IOException, since it doesn't know what to do in such a case. It is just passed to the caller; maybe the caller knows how to handle it.


Both @ratchet and @Voo are correct. As for what is happening: the maximum int value is 2,147,483,647 (http://download.oracle.com/javase/tutorial/java/nutsandbolts/datatypes.html), and a 12 GB file is on the order of 12,884,901,888 bytes, which clearly doesn't fit in an int. Note that according to the API javadoc (http://download.oracle.com/javase/6/docs/api/java/io/BufferedInputStream.html#available%28%29), and as stated by @Voo, this doesn't break the method's contract at all; it just isn't what you are looking for.
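To illustrate the arithmetic: a multi-gigabyte byte count simply cannot survive the int return type of available(). The exact value a given JDK returns for a huge file is platform-dependent, but the two obvious behaviors, naive narrowing and clamping, look like this:

```java
public class IntOverflow {
    public static void main(String[] args) {
        long twelveGB = 12_000_000_000L;
        // Naively narrowing the long byte count to int wraps around:
        System.out.println((int) twelveGB); // -884901888
        // Clamping instead caps the result at roughly 2.1 billion,
        // which matches the ~2,100,000,000 the asker saw for the 3 GB file:
        System.out.println((int) Math.min(twelveGB, Integer.MAX_VALUE)); // 2147483647
    }
}
```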
