logrotate-like functionality for database tables/file system files

tl;dr summary: Are there standard solutions for limiting the length of database tables and number of file system files based on number, disk space or time?


I have a Java web service that allows users to run operations that are internally handled as jobs. In order to access results of previously run jobs or asynchronous jobs the user gets a handle in the form of a job ID. I save all this information in a few database tables of a relational database (currently Apache Derby) because it's much more convenient than inventing a new file format (and also probably much more reliable and performant). The actual job results are saved as XML files in the file system.

Jobs may be executed very frequently (1/s and up), so the tables/directories might get quite large after some time. What I need is a method that prunes the oldest entries from the job history based on

  • job count (a maximum of n jobs and their results should be saved)
  • table/directory size (the tables should take at most n GB of space on the hard drive)
  • when the job was run (keep only jobs that completed at most n days ago)
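For what it's worth, the retention policy itself is easy to express even without a standard library for it. Below is a minimal sketch of the selection logic only (the `Job` record and all field names are hypothetical, not the service's actual schema): sort newest-first, keep a contiguous newest prefix that satisfies all three limits, and mark everything older for deletion, logrotate-style.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Hypothetical job record; these fields are assumptions for illustration.
record Job(String id, Instant completedAt, long resultBytes) {}

class JobPruner {
    // Returns the jobs to delete so that the surviving jobs form a
    // contiguous "newest" prefix respecting all three limits:
    // maximum count, maximum total result size, and maximum age.
    static List<Job> selectForPruning(List<Job> jobs, int maxCount,
                                      long maxTotalBytes, Duration maxAge,
                                      Instant now) {
        List<Job> sorted = new ArrayList<>(jobs);
        sorted.sort(Comparator.comparing(Job::completedAt).reversed());

        List<Job> toDelete = new ArrayList<>();
        Instant oldestAllowed = now.minus(maxAge);
        long keptBytes = 0;
        int keptCount = 0;
        boolean cutting = false;
        for (Job job : sorted) {
            if (!cutting) {
                cutting = keptCount >= maxCount
                        || keptBytes + job.resultBytes() > maxTotalBytes
                        || job.completedAt().isBefore(oldestAllowed);
            }
            if (cutting) {
                toDelete.add(job); // once one job trips a limit, all older jobs go too
            } else {
                keptCount++;
                keptBytes += job.resultBytes();
            }
        }
        return toDelete;
    }
}
```

The actual deletion would then be a batched SQL `DELETE` on the job tables plus removal of the corresponding result files, ideally in that order so an interrupted run leaves orphaned files rather than dangling database rows.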

I haven't decided which solution to take yet, so the more flexibility the better. I fear that if I implement this myself, the solution might be quite error-prone and it would take some time to make the system robust. The software I'm developing should be able to run for a very long time without any interruption (OK, whose shouldn't...).
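For the file-system side, the age-based criterion at least can be handled with plain `java.nio.file` and no external tooling. A minimal sketch, assuming the XML results sit flat in one directory and that last-modified time approximates completion time (both assumptions, not stated in the setup above):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.FileTime;
import java.time.Duration;
import java.time.Instant;
import java.util.stream.Stream;

class ResultFileCleaner {
    // Deletes *.xml result files in dir whose last-modified time is older
    // than maxAge; returns the number of files removed.
    static int deleteOlderThan(Path dir, Duration maxAge) throws IOException {
        Instant cutoff = Instant.now().minus(maxAge);
        int removed = 0;
        try (Stream<Path> files = Files.list(dir)) {
            for (Path p : (Iterable<Path>) files::iterator) {
                if (!p.toString().endsWith(".xml")) continue;
                FileTime mtime = Files.getLastModifiedTime(p);
                if (mtime.toInstant().isBefore(cutoff)) {
                    Files.delete(p);
                    removed++;
                }
            }
        }
        return removed;
    }
}
```

Count- and size-based limits would need a listing sorted by mtime first; and if robustness matters, driving the cleanup from the database (delete rows, then files) avoids the two stores drifting apart.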
