Most reliable way to transfer large files to a remote server via Java?

I'm building a Java application that will allow our users to load a list of files and have those files transferred to our server for video encoding. I've already built an API for managing the files before and after they've been transferred, but I need to decide on a good transfer protocol for actually moving the files.

Right now I'm leaning towards using the Apache Commons Net package (see: http://commons.apache.org/net/) along with FTP to move the files from the client computer to the server. Once the files are there, I'll use secure API calls to move them wherever they need to go.

Is this the best route? Is there a better way to reliably transfer large (1 GB+) files? Is there a way to resume a broken upload with this approach? I'd like to avoid traditional HTTP POST requests, as they're unreliable and can't resume broken uploads.
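
For context, here's a rough sketch of the resume logic I have in mind with Commons Net -- the host, credentials, and paths are placeholders, and it assumes the FTP server supports APPE (append):

    import java.io.IOException;
    import java.io.InputStream;
    import java.io.RandomAccessFile;
    import java.nio.channels.Channels;
    import org.apache.commons.net.ftp.FTP;
    import org.apache.commons.net.ftp.FTPClient;
    import org.apache.commons.net.ftp.FTPFile;

    public class FtpResumeUpload {
        public static void main(String[] args) throws IOException {
            FTPClient ftp = new FTPClient();
            ftp.connect("ftp.example.com");          // placeholder host
            ftp.login("user", "password");           // placeholder credentials
            ftp.enterLocalPassiveMode();
            ftp.setFileType(FTP.BINARY_FILE_TYPE);   // binary mode, or video files get corrupted

            // Ask the server how much of the file already made it across.
            String remote = "incoming/video.mov";
            long offset = 0;
            FTPFile[] existing = ftp.listFiles(remote);
            if (existing.length == 1) {
                offset = existing[0].getSize();
            }

            // Skip the bytes that are already uploaded and append the rest.
            try (RandomAccessFile raf = new RandomAccessFile("/path/to/video.mov", "r")) {
                raf.seek(offset);
                InputStream rest = Channels.newInputStream(raf.getChannel());
                if (!ftp.appendFile(remote, rest)) {
                    throw new IOException("Append failed: " + ftp.getReplyString());
                }
            }
            ftp.logout();
            ftp.disconnect();
        }
    }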

Thanks!


You didn't mention whether Amazon S3 is an option for your solution, but it does offer native multipart upload support. The basic workflow (sketched in code below) is:

  1. Initiate the multipart upload and hold on to the upload ID in the response
  2. Upload parts -- these can be concurrent and retried as necessary
  3. Use the upload ID to complete the upload, combining the parts into a single file
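
As a minimal sketch, those three steps map onto the low-level multipart API in the AWS SDK for Java like this -- the bucket, key, and file path below are placeholders:

    import com.amazonaws.services.s3.AmazonS3Client;
    import com.amazonaws.services.s3.model.*;
    import java.io.File;
    import java.util.ArrayList;
    import java.util.List;

    public class MultipartUploadSketch {
        public static void main(String[] args) {
            AmazonS3Client s3 = new AmazonS3Client(); // credentials from the default chain
            String bucket = "my-upload-bucket", key = "videos/input.mov"; // placeholders
            File file = new File("/path/to/large-video.mov");

            // Step 1: initiate the upload and keep the upload ID.
            String uploadId = s3.initiateMultipartUpload(
                    new InitiateMultipartUploadRequest(bucket, key)).getUploadId();

            // Step 2: upload 5 MB parts; each part can be retried independently.
            List<PartETag> etags = new ArrayList<PartETag>();
            long partSize = 5L * 1024 * 1024, offset = 0;
            for (int part = 1; offset < file.length(); part++, offset += partSize) {
                long size = Math.min(partSize, file.length() - offset);
                UploadPartResult result = s3.uploadPart(new UploadPartRequest()
                        .withBucketName(bucket).withKey(key)
                        .withUploadId(uploadId).withPartNumber(part)
                        .withFile(file).withFileOffset(offset).withPartSize(size));
                etags.add(result.getPartETag());
            }

            // Step 3: use the upload ID to combine the parts into one object.
            s3.completeMultipartUpload(new CompleteMultipartUploadRequest(
                    bucket, key, uploadId, etags));
        }
    }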

Their SDK offers built-in file slicing and chunk upload.
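
If you'd rather not manage parts yourself, the SDK's TransferManager wraps the same workflow -- again a sketch with placeholder names:

    import com.amazonaws.services.s3.transfer.TransferManager;
    import com.amazonaws.services.s3.transfer.Upload;
    import java.io.File;

    public class TransferManagerSketch {
        public static void main(String[] args) throws Exception {
            // Picks up credentials from the default provider chain.
            TransferManager tm = new TransferManager();

            // Large files are split into parts automatically and uploaded
            // concurrently; failed parts are retried.
            Upload upload = tm.upload("my-upload-bucket", "videos/input.mov",
                    new File("/path/to/large-video.mov"));
            upload.waitForCompletion(); // blocks until the whole object exists in S3
            tm.shutdownNow();
        }
    }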

Even if S3 is not the final location, you could use S3 as an upload holding pen and download the file at your convenience for permanent storage.


Your understanding of HTTP POST is not exactly correct. The HTTP standard doesn't restrict range requests to the GET method; one can use them with POST or PUT as well.

Also, if you have control over both the client and the server-side script, you can POST each data chunk along with its StartAt position as a separate parameter. On the server, you check that parameter and append the received data at the specified position.
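
A minimal client-side sketch of that idea with plain HttpURLConnection -- the endpoint and the startAt parameter are hypothetical and just need to match whatever your server-side script expects:

    import java.io.IOException;
    import java.io.OutputStream;
    import java.io.RandomAccessFile;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class ChunkedHttpUpload {

        // POSTs one chunk of the file starting at startAt; the (hypothetical)
        // server-side script appends the body at that offset.
        static void uploadChunk(String endpoint, String filePath,
                                long startAt, int chunkSize) throws IOException {
            URL url = new URL(endpoint + "?name=video.mov&startAt=" + startAt);
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "application/octet-stream");

            try (RandomAccessFile raf = new RandomAccessFile(filePath, "r");
                 OutputStream out = conn.getOutputStream()) {
                raf.seek(startAt);                // resume exactly where we stopped
                byte[] buf = new byte[chunkSize];
                int read = raf.read(buf);
                if (read > 0) {
                    out.write(buf, 0, read);
                }
            }
            if (conn.getResponseCode() != HttpURLConnection.HTTP_OK) {
                throw new IOException("Upload failed at offset " + startAt);
            }
            conn.disconnect();
        }
    }

On a restart, the client asks the server how many bytes it already has and calls uploadChunk again from that offset.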
