How to efficiently store ever-changing datasets (search results) for periodic reports
I am having trouble coming up with a good way to store a dataset that continually changes.
I want to track and periodically report on the contents of specific websites. For example, for a certain website I want to keep track of all the PDF documents that are available. Then I want to report periodically (say, quarterly) on the number of documents, PDF version number and various other statistics. In addition, I want to track the change of these metrics over time. E.g. I want to graph the increase in PDF documents offered on the website over time.
My input is basically a long list of URLs that point to all the PDF documents on the website. These inputs arrive intermittently, but they may not coincide with the dates I want to run the reports on. For example, in Q4 2010 I may get two lists of URLs, several weeks apart. In Q1 2011 I may get just one.
I am having trouble figuring out how to efficiently store this input data in a database of some sort so that I can easily generate the correct reports.
On the one hand, I could simply insert the complete list into a table each time I receive a new list, along with a date of import. But I fear that the table will grow quite big in a short time, and most of it will be duplicate URLs.
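Something like this is what I have in mind (table and column names are just placeholders):

CREATE TABLE url_snapshot (
    import_date DATE         NOT NULL,
    url         VARCHAR(500) NOT NULL,
    pdf_version VARCHAR(10),
    file_size   BIGINT,
    PRIMARY KEY (import_date, url)
);

-- A quarterly report would then pick the last import on or before the quarter end:
SELECT COUNT(*) AS document_count
FROM url_snapshot
WHERE import_date = (SELECT MAX(import_date)
                     FROM url_snapshot
                     WHERE import_date <= '2010-12-31');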
But on the other hand, I fear it may get quite complicated to maintain a list of unique URLs or documents, especially when documents are added, removed and then re-added over time. I fear I might get into the complexities of creating a temporal database. And I shudder to think what happens when the document itself is updated but the URL stays the same (in that case the metadata might change, such as the PDF version, file size, etcetera).
Can anyone recommend a good way to store this data so I can generate reports from it? I would especially like the ability to retroactively generate reports. E.g. when I start tracking a new website in Q1 2011, I would like to be able to generate a report from the Q4 2010 data as well, even though the Q1 2011 data has already been imported.
Thanks in advance!
Why not just use a single table, called something like URL_HISTORY:
URL VARCHAR (PK)
START_DATE DATE (PK)
END_DATE DATE
VERSION VARCHAR
Have END_DATE as either NULL or a suitable dummy date (e.g. 31-Dec-9999) where the version has not been superseded; set END_DATE to the last valid date where the version has been superseded, and create a new record for the new version - e.g.
+------------------+-------------+--------------+---------+
|URL | START_DATE | END_DATE | VERSION |
|..\Harry.pdf | 01-OCT-2009 | 31-DEC-9999 | 1.1.0 |
|..\SarahJane.pdf | 01-OCT-2009 | 31-DEC-2009 | 1.1.0 |
|..\SarahJane.pdf | 01-JAN-2010 | 31-DEC-9999 | 1.1.1 |
+------------------+-------------+--------------+---------+
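A minimal sketch of the DDL and the typical operations, using the dummy-date approach (generic SQL; column sizes are just illustrative):

CREATE TABLE URL_HISTORY (
    URL        VARCHAR(500) NOT NULL,
    START_DATE DATE         NOT NULL,
    END_DATE   DATE         NOT NULL DEFAULT '9999-12-31',
    VERSION    VARCHAR(10),
    PRIMARY KEY (URL, START_DATE)
);

-- When a document changes, close the current record...
UPDATE URL_HISTORY
SET END_DATE = '2009-12-31'
WHERE URL = '..\SarahJane.pdf'
  AND END_DATE = '9999-12-31';

-- ...and open a new record for the new version.
INSERT INTO URL_HISTORY (URL, START_DATE, END_DATE, VERSION)
VALUES ('..\SarahJane.pdf', '2010-01-01', '9999-12-31', '1.1.1');

-- Retroactive, point-in-time reporting: everything live on a given date.
SELECT VERSION, COUNT(*) AS document_count
FROM URL_HISTORY
WHERE START_DATE <= '2010-12-31'
  AND END_DATE   >= '2010-12-31'
GROUP BY VERSION;

The dummy date keeps the point-in-time query a simple range check on both columns, which is more awkward to express with NULL end dates.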
What about using a document database? Instead of saving each URL individually, you save one document that holds the whole collection of URLs for an import. Then, whenever you run a process that iterates over the URLs, you fetch all the documents that exist in a given time frame (or whatever other qualification you have) and run over the URLs in each of those documents.
This could also be emulated in SQL Server by serializing your object to JSON or XML and storing the output in a suitable column.
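A rough sketch of the SQL Server variant, assuming SQL Server 2016 or later for the built-in JSON functions (ISJSON, OPENJSON); table and column names are illustrative:

CREATE TABLE url_import (
    import_id   INT IDENTITY(1,1) PRIMARY KEY,
    import_date DATE NOT NULL,
    url_list    NVARCHAR(MAX) NOT NULL
        CONSTRAINT chk_url_list_is_json CHECK (ISJSON(url_list) = 1)
);

-- One row per imported list:
INSERT INTO url_import (import_date, url_list)
VALUES ('2010-11-15',
        N'[{"url":"..\\Harry.pdf","version":"1.1.0"},
           {"url":"..\\SarahJane.pdf","version":"1.1.0"}]');

-- Expand the stored JSON back into rows at report time:
SELECT i.import_date, j.url, j.version
FROM url_import AS i
CROSS APPLY OPENJSON(i.url_list)
     WITH (url     NVARCHAR(500) '$.url',
           version NVARCHAR(10)  '$.version') AS j
WHERE i.import_date <= '2010-12-31';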