
How should I best design my PHP image upload and resize script to catch and report errors?

I am writing a PHP script that will sit on a server running on Amazon EC2. It will receive uploaded files, create a record in the database, rename the file to match the database id, resize the file, move the file to a new location on the server, and also PUT the image file to Amazon S3.

At each of these stages there is the possibility of failure, which will cause the script to stop; if the user is uploading many files, the next file waiting will not be uploaded.

So at each of these activities I know I need to catch any errors, record them to be dealt with later and move on to the next image, or report back that a problem has occurred.

I think I want to record failed uploads in my database so that I can get a report of when uploads failed, along with the filename, the username of the uploader, and any other info that will let me contact the user, or, if the error occurred at the resize stage for example, re-run the resize and put the image on Amazon S3 later.
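A minimal sketch of what recording a failed upload might look like, assuming a PDO connection and a hypothetical `failed_uploads` table (the table and column names are just illustrative):

```php
<?php
// Minimal sketch: log a failed upload, assuming a PDO connection ($pdo)
// and a hypothetical `failed_uploads` table.
function logFailedUpload(PDO $pdo, $filename, $username, $stage, $message)
{
    $sql = 'INSERT INTO failed_uploads (filename, username, stage, error_message, failed_at)
            VALUES (:filename, :username, :stage, :message, NOW())';
    $stmt = $pdo->prepare($sql);
    $stmt->execute([
        ':filename' => $filename,
        ':username' => $username,
        ':stage'    => $stage,      // e.g. 'rename', 'resize', 's3_put'
        ':message'  => $message,
    ]);
}
```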

I am not a very experienced PHP coder. Are try/catch blocks suitable for all of the above situations? Should I use try/catch for rename()?

cheers


So I think the most important part of this solution is likely storing the details around the failure event so you can either retry later, debug the problem, or at the very least, contact the user if necessary.

S3 is probably ideal for this - I'd basically write an error handling function which, when called, bundles up all the details around the request (probably all the HTTP POST variables, HTTP request headers, etc.) and the image, and stores them on S3 for future retrieval. Since your service is running on EC2, the odds of failing to write to S3 are quite low, so this is probably an effective catch-all.
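A rough sketch of that idea, assuming the AWS SDK for PHP (v3); the bucket name and key prefix are placeholders, and on EC2 the client can pick up credentials from the instance role automatically:

```php
<?php
// Sketch: bundle the request context and the image into an "error bucket" on S3.
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3 = new S3Client(['version' => 'latest', 'region' => 'us-east-1']);

function storeFailureOnS3(S3Client $s3, $bucket, $tmpImagePath, $errorMessage)
{
    $key = 'errors/' . date('Ymd-His') . '-' . uniqid();

    // Bundle the request context so the upload can be debugged or replayed later.
    $context = [
        'error'   => $errorMessage,
        'post'    => $_POST,
        'headers' => function_exists('getallheaders') ? getallheaders() : [],
        'time'    => date('c'),
    ];

    $s3->putObject([
        'Bucket' => $bucket,
        'Key'    => $key . '.json',
        'Body'   => json_encode($context),
    ]);

    // Store the original image alongside the metadata, if we still have it.
    if ($tmpImagePath && is_file($tmpImagePath)) {
        $s3->putObject([
            'Bucket'     => $bucket,
            'Key'        => $key . '.img',
            'SourceFile' => $tmpImagePath,
        ]);
    }

    return $key;
}
```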

On S3 write failures, I'd do a low # of retries, but not exhaustive since it's so unlikely. You can even loop in a SimpleDB logging mechanism if you'd like to log the fact that you stored on S3, but that's not strictly necessary since you can just list the files in your "error bucket" to see if you've had any errors. By requesting each object, you can also likely see what the problem was.
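For the retries, a small fixed loop is enough; a sketch, again assuming the v3 SDK (the helper name is hypothetical):

```php
<?php
// Sketch: retry an S3 write a few times before giving up.
use Aws\Exception\AwsException;

function putObjectWithRetry(Aws\S3\S3Client $s3, array $params, $maxAttempts = 3)
{
    for ($attempt = 1; $attempt <= $maxAttempts; $attempt++) {
        try {
            return $s3->putObject($params);
        } catch (AwsException $e) {
            if ($attempt === $maxAttempts) {
                throw $e;          // give up and let the caller decide what to do
            }
            sleep(1);              // brief pause before retrying
        }
    }
}
```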

After that's done, you probably just want to have try/catch wrapped around your other failure points and on failure events, call your store-on-S3 function and move on to the next upload.
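Putting that together, the per-file loop might look roughly like the sketch below; the helper functions and field names are placeholders building on the earlier snippets. This also answers the rename() question: rename() doesn't throw exceptions, it returns false and emits a warning, so you check the return value and turn it into an exception yourself rather than relying on try/catch alone.

```php
<?php
// Sketch: each step either succeeds or throws; a failure is logged via the
// S3 helper above and the loop moves on to the next file.
foreach ($_FILES['images']['tmp_name'] as $i => $tmpPath) {
    try {
        $id = createDatabaseRecord($pdo, $_FILES['images']['name'][$i]);

        $target = "/var/uploads/{$id}.jpg";
        if (!rename($tmpPath, $target)) {
            // rename() returns false on failure instead of throwing.
            throw new RuntimeException("rename() failed for {$tmpPath}");
        }

        resizeImage($target);                 // placeholder; would throw on failure
        putObjectWithRetry($s3, [
            'Bucket'     => $bucket,
            'Key'        => "images/{$id}.jpg",
            'SourceFile' => $target,
        ]);
    } catch (Exception $e) {
        // Record the failure and carry on with the next upload.
        storeFailureOnS3($s3, $bucket, $tmpPath, $e->getMessage());
        continue;
    }
}
```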

If your service takes off, you can further improve on this by making the error handling bit part of your inevitable store-and-queue approach to uploading and processing those uploads. That approach will likely involve always storing the uploaded file on S3 anyway, then queuing the processing requests on SQS, so your error handling function can simply reference the S3 file that's already been stored, rather than having to bundle and store.
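If you do go that route, a sketch of the store-and-queue flow might look like this; the queue URL, bucket, and message fields are placeholders:

```php
<?php
// Sketch: always store the raw upload on S3 first, then queue the processing
// work on SQS for a worker to resize and re-upload the result.
use Aws\Sqs\SqsClient;

$sqs = new SqsClient(['version' => 'latest', 'region' => 'us-east-1']);

// 1. Store the original upload on S3 immediately.
$key = 'incoming/' . uniqid() . '.jpg';
$s3->putObject([
    'Bucket'     => $bucket,
    'Key'        => $key,
    'SourceFile' => $tmpPath,
]);

// 2. Queue the processing request; the worker only needs the S3 key.
$sqs->sendMessage([
    'QueueUrl'    => 'https://sqs.us-east-1.amazonaws.com/123456789012/image-processing',
    'MessageBody' => json_encode(['s3_key' => $key, 'user' => $username]),
]);
```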

Hope that helps!
