
Proftpd verify complete upload

I was wondering whether there was a best practice for checking if an upload to your ftp server was successful.

The system I'm working with has an upload directory which contains subdirectories for every user where the files are uploaded.

Files in these directories are only temporary, they're disposed of once handled.

The system loops through each of these subdirectories and the new files in them, and for each file checks whether it has been modified within the last 10 seconds. If it hasn't been modified for 10 seconds, the system assumes the file was uploaded successfully.

I don't like the way the system currently handles this, because it will try to process the file and fail if the upload is incomplete, instead of waiting and allowing the user to resume the upload until it's complete. That might be fine for small files which don't take long to upload, but for big files I'd like the upload to be resumable.

I also don't like the looping over directories and files; the system idles at high CPU usage. So I've implemented pyinotify to trigger an action when a file is written. I haven't really looked at its source code, but I can only assume it is more efficient than the current implementation (which does more than I've described).
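For reference, this is roughly the kind of watcher I mean - a minimal sketch only, with a made-up upload root (/srv/ftp/upload) and a plain print instead of the real handling:

    import pyinotify

    UPLOAD_ROOT = "/srv/ftp/upload"  # placeholder; adjust to your upload directory

    class UploadHandler(pyinotify.ProcessEvent):
        def process_IN_CLOSE_WRITE(self, event):
            # Fires when a file opened for writing is closed. This only tells
            # us the client closed the connection, not that the upload is
            # actually complete.
            print("written:", event.pathname)

    wm = pyinotify.WatchManager()
    notifier = pyinotify.Notifier(wm, UploadHandler())
    # Watch the per-user subdirectories recursively, adding new ones as they appear.
    wm.add_watch(UPLOAD_ROOT, pyinotify.IN_CLOSE_WRITE, rec=True, auto_add=True)
    notifier.loop()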

However I still need to check whether the file was successfully uploaded.

I know I can parse the xferlog to get all complete uploads. Like:

awk '($12 ~ /^i$/ && $NF ~ /^c$/){print $9}' /var/log/proftpd/xferlog

This would make pyinotify unnecessary, since I can get the paths of both complete and incomplete uploads just by tailing the log.

So my solution would be to check the xferlog in my run-loop and only handle complete files.
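As a rough illustration of that idea, here is a minimal sketch of tailing the xferlog and yielding only completed incoming transfers. The log path, the poll interval and the handle_upload() callback are placeholders, filenames containing whitespace would confuse the naive split, and log rotation isn't handled:

    import time

    XFERLOG = "/var/log/proftpd/xferlog"  # placeholder; wherever your xferlog lives

    def completed_uploads(path=XFERLOG, poll=1.0):
        """Yield the paths of completed incoming transfers as they are logged.

        Assumes the standard xferlog layout: the timestamp takes the first
        five fields, the filename is field 9, the direction flag is field 12
        ('i' = incoming) and the last field is the completion status ('c').
        """
        with open(path) as log:
            log.seek(0, 2)              # 2 == os.SEEK_END: start at the current end of the log
            while True:
                line = log.readline()
                if not line:
                    time.sleep(poll)    # nothing new yet
                    continue
                fields = line.split()
                if len(fields) >= 18 and fields[11] == "i" and fields[-1] == "c":
                    yield fields[8]     # field 9: path of the uploaded file

    # for filename in completed_uploads():
    #     handle_upload(filename)      # hypothetical processing callback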

Unless there's a best practice or simply a better way to do this?

What would the disadvantages be with this method?

I run my app on a Debian server, and proftpd is installed on the same server. Also, I have no control over the clients sending the files.


Looking at the proftpd docs, I see http://www.proftpd.org/docs/directives/linked/config_ref_HiddenStores.html

The HiddenStores directive enables two-step file uploads: files are uploaded as ".in.filename." and, once the upload is complete, renamed to just "filename". This provides a degree of atomicity and helps prevent 1) incomplete uploads and 2) files being used while they're still in the process of being uploaded.

This should be the "better way" to solve the problem when you have control of proftpd, as it handles all the work for you - you can assume that any file whose name doesn't start with .in. is a completed upload. You can also safely delete any orphaned .in.* files after some arbitrary period of inactivity in a tidy-up script somewhere.
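For example, with HiddenStores on in proftpd.conf, a pyinotify watcher only needs to react to the rename ProFTPD performs when an upload finishes (an IN_MOVED_TO event). A minimal sketch - the upload root /srv/ftp/upload and the handle_upload() stub are placeholders:

    import pyinotify

    UPLOAD_ROOT = "/srv/ftp/upload"  # placeholder; adjust to your upload directory

    def handle_upload(path):
        # placeholder for whatever processing the finished file needs
        print("completed upload:", path)

    class CompletedHandler(pyinotify.ProcessEvent):
        def process_IN_MOVED_TO(self, event):
            # HiddenStores renames ".in.<name>." to "<name>" once the upload
            # completes, which shows up here as a "moved to" event.
            if not event.name.startswith(".in."):
                handle_upload(event.pathname)

    wm = pyinotify.WatchManager()
    notifier = pyinotify.Notifier(wm, CompletedHandler())
    wm.add_watch(UPLOAD_ROOT, pyinotify.IN_MOVED_TO, rec=True, auto_add=True)
    notifier.loop()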


You can use pure-uploadscript if your pure-ftpd installation was compiled with the --with-uploadscript option.

It is used to launch a specified script after every upload is completely finished.

  1. Set CallUploadScript to "yes"
  2. Create the script file, e.g. touch /tmp/script.sh
  3. Write the code in it. In my example the script renames the uploaded file, prefixing its name with "completed.":

    #!/bin/bash
    # Rename a finished upload so downstream code can recognise it as complete.
    fullpath=$1
    filename=$(basename "$1")
    dirname=${fullpath%/*}
    mv "$fullpath" "$dirname/completed.$filename"

  4. Run chmod 755 /tmp/script.sh to make the script executable by pure-uploadscript

  5. Then run pure-uploadscript -B -r /tmp/script.sh so it runs in the background and launches your script after each upload

Now /tmp/script.sh will be launched after each completed upload.
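On the processing side, a sketch of picking up only the files the script has already renamed (the upload root is a placeholder):

    import os

    UPLOAD_ROOT = "/srv/ftp/upload"  # placeholder; adjust to your upload directory

    def completed_files(root=UPLOAD_ROOT):
        """Yield files that the upload script has already renamed."""
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                if name.startswith("completed."):
                    yield os.path.join(dirpath, name)

    # for path in completed_files():
    #     print(path)  # replace with your actual handling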
