
Optimising file download

So I've got the following method for downloading files from Amazon S3, and for now it is working, but I anticipate that in the future I'll have to deal with considerably larger files - 2-3 gigabytes. So what performance optimizations would you recommend? Also, links covering some GENERAL ideas about file I/O in Java, applicable not only to my case but in general, would be much appreciated.

public static void fetchFileFromS3(String filePath, String outPath) {
    int size = 5 * 1024 * 1024; //use 5 megabytes buffers
    byte bufSize[] = new byte[size];  
    FileOutputStream fout = null;
    BufferedOutputStream bufOut = null;
    BufferedInputStream bufIn = null;
    String[] result = getRealPath(filePath);
    S3Object object = Utilities.getS3Instance().getObject(new GetObjectRequest(result[0], result[1]));

    try {
        fout = new FileOutputStream(outPath);
        bufOut = new BufferedOutputStream(fout, size);
        bufIn = new BufferedInputStream(object.getObjectContent(), size);
        int bytesRead = 0;
        while((bytesRead = bufIn.read(bufSize)) != -1) {

            bufOut.write(bufSize, 0, bytesRead);


        }

        System.out.println("Finished downloading file");

        bufOut.flush();
        bufOut.close();
        bufIn.close();

    } catch (IOException ex) {
        Logger.getLogger(Utilities.class.getName()).log(Level.SEVERE, null, ex);
    }
}


I think looking into the newer Java NIO APIs makes sense, even though there's some disagreement about whether they're actually more efficient for large files.
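As a baseline, a minimal sketch using the java.nio.file API could just stream the object body straight to disk; here I'm assuming the InputStream passed in is the result of object.getObjectContent() from your method, and saveToDisk is just a hypothetical helper name:

import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class S3Download {
    // Streams the S3 object body straight to a local file.
    // 'in' would be object.getObjectContent() from the method in the question.
    static void saveToDisk(InputStream in, String outPath) throws IOException {
        try (InputStream body = in) {
            Files.copy(body, Paths.get(outPath), StandardCopyOption.REPLACE_EXISTING);
        }
    }
}

This also takes care of closing the stream on exceptions via try-with-resources, which the original method doesn't do.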

For example, in the answer to this question, chunked memory-mapping with NIO seems like it might do the trick.
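To illustrate the idea (this is a sketch, not the exact code from that answer), you could memory-map the destination file one region at a time and read the S3 stream directly into each mapped buffer. This assumes the total content length is known up front (e.g. from the object metadata), and copyViaMappedChunks is a hypothetical helper name:

import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;
import java.nio.MappedByteBuffer;
import java.nio.channels.Channels;
import java.nio.channels.FileChannel;
import java.nio.channels.ReadableByteChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class MappedDownload {
    // Hypothetical helper: writes 'totalLen' bytes from 'in' to 'outPath'
    // by memory-mapping the output file one chunk at a time.
    static void copyViaMappedChunks(InputStream in, String outPath, long totalLen) throws IOException {
        int chunk = 64 * 1024 * 1024; // map the output file 64 MB at a time
        try (ReadableByteChannel src = Channels.newChannel(in);
             FileChannel dest = FileChannel.open(Paths.get(outPath),
                     StandardOpenOption.CREATE, StandardOpenOption.READ,
                     StandardOpenOption.WRITE)) {
            long pos = 0;
            while (pos < totalLen) {
                long len = Math.min(chunk, totalLen - pos);
                MappedByteBuffer region = dest.map(FileChannel.MapMode.READ_WRITE, pos, len);
                // Fill the mapped region directly from the network stream.
                while (region.hasRemaining()) {
                    if (src.read(region) == -1) {
                        throw new EOFException("stream ended before " + totalLen + " bytes");
                    }
                }
                pos += len;
            }
        }
    }
}

Whether this actually beats a plain buffered copy depends heavily on the OS, the disk, and the network, so it's worth benchmarking both against your real workload before committing to it.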
