
Do we log duplicate data in tables or keep joining on all future reads?

Take a user activity table, for example. At a minimum you would need columns like user_id, datetime, activity_id, object_id, etc. I can join with the object table to find the object's owner, and join with the activity table to find the activity group, type, etc.

OR

I can copy those details at runtime into the activity table as well. This means duplicate data, but in the future, whenever I need to read, I do not have to keep joining. I would then have all possible pieces of data in the activity table itself.

If I do duplicate the data, do I duplicate it with an FK or standalone?


Certainly databases should be Normalised: they perform better, and SQL was designed for Normalised structures and for joining them. There is no reason to "de-normalise".

However, there is one case with an exceptional need: History tables or Log files. The question to consider is: when you query this table, do you need the current data from the parent object, or is this a true log of what happened, where you need the data that was current at the time the log row was created?

In the latter case, since there is an explicit purpose in storing such data, it is not actually a duplicate and not a reversal of Normalisation; therefore it is incorrect to classify the data as duplicate or "de-normalised". It is just Audit data, which must be retained. Generally, Log files (that is what they really are; we store them in the database for convenience) are not part of the database proper, and database rules do not apply to them.
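As a minimal sketch of such an audit log (using sqlite3 purely for illustration; the `activity_log` table and the `owner_id_at_time` column name are assumptions, not from the question), the point is that the owner is copied at write time, so later changes to the object do not rewrite history:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE object (
    object_id INTEGER PRIMARY KEY,
    owner_id  INTEGER NOT NULL
);
CREATE TABLE activity_log (
    log_id           INTEGER PRIMARY KEY,
    occurred_at      TEXT    NOT NULL,
    user_id          INTEGER NOT NULL,
    object_id        INTEGER NOT NULL,
    owner_id_at_time INTEGER NOT NULL  -- snapshot by explicit requirement, not a duplicate
);
INSERT INTO object VALUES (7, 42);
""")

# At write time, copy the owner as it is *now*.
owner = cur.execute("SELECT owner_id FROM object WHERE object_id = 7").fetchone()[0]
cur.execute(
    "INSERT INTO activity_log (occurred_at, user_id, object_id, owner_id_at_time) "
    "VALUES (datetime('now'), 1, 7, ?)",
    (owner,),
)

# Later the object changes hands...
cur.execute("UPDATE object SET owner_id = 99 WHERE object_id = 7")

# ...but the log still shows who owned it at the time.
snapshot = cur.execute("SELECT owner_id_at_time FROM activity_log").fetchone()
print(snapshot)  # (42,)
```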

But always implement indices on them, and purge them regularly; otherwise they turn into monsters.

The alternative to Log files is History tables. Rather than one Log file for actions, this is implemented on a per-table basis, as needed. For each table for which an Audit of changes must be retained, a "copy" of the table is implemented. This stores the before-image of rows that have been changed. The DDL is exactly the same as the source table, plus one item: a TIMESTAMP or DATETIME column is added to the PK. Here too, it is by explicit requirement, and it would be incorrect to classify these tables as duplicate or "de-normalised".
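A minimal sketch of that DDL pattern (again via sqlite3 for illustration; the `product` table, the `audit_ts` column name, and the trigger are my assumptions, and a second-resolution timestamp in the PK is a simplification):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Source table.
cur.execute("""
CREATE TABLE product (
    product_id INTEGER PRIMARY KEY,
    name       TEXT NOT NULL,
    price      REAL NOT NULL
)""")

# History table: identical columns, plus a timestamp extending the PK.
cur.execute("""
CREATE TABLE product_history (
    product_id INTEGER NOT NULL,
    audit_ts   TEXT    NOT NULL DEFAULT (datetime('now')),
    name       TEXT NOT NULL,
    price      REAL NOT NULL,
    PRIMARY KEY (product_id, audit_ts)
)""")

# Capture the before-image of every changed row.
cur.execute("""
CREATE TRIGGER product_before_update
BEFORE UPDATE ON product
BEGIN
    INSERT INTO product_history (product_id, name, price)
    VALUES (OLD.product_id, OLD.name, OLD.price);
END""")

cur.execute("INSERT INTO product VALUES (1, 'Widget', 9.99)")
cur.execute("UPDATE product SET price = 12.50 WHERE product_id = 1")

before_images = cur.execute(
    "SELECT product_id, name, price FROM product_history"
).fetchall()
print(before_images)  # [(1, 'Widget', 9.99)]
```

Note that only changed rows land in `product_history`, which is exactly the volume advantage over a Log file described below.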

History tables are superior to Log files, because they store only the rows that have been changed; whereas Log files store unchanging data repeatedly. The resulting volumes of data stored are quite different.

Example of History Tables


What you are talking about is called database normalization (or in your case de-normalization).

Since a relational database is pretty much designed to perform joins, there is usually VERY little reason to de-normalise and store duplicate data in one big wide table. There are some edge cases and tradeoffs, but in the example you gave they don't really apply, so I recommend sticking with the 3-table approach you listed (activity, user_activity, object).
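A sketch of that 3-table approach and the join that reads it back (sqlite3 for illustration; the column names beyond those the question mentions, such as `activity_type` and `occurred_at`, are assumptions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE activity (
    activity_id   INTEGER PRIMARY KEY,
    activity_type TEXT NOT NULL
);
CREATE TABLE object (
    object_id INTEGER PRIMARY KEY,
    owner_id  INTEGER NOT NULL
);
CREATE TABLE user_activity (
    user_id     INTEGER NOT NULL,
    occurred_at TEXT    NOT NULL,
    activity_id INTEGER NOT NULL REFERENCES activity(activity_id),
    object_id   INTEGER NOT NULL REFERENCES object(object_id)
);
INSERT INTO activity VALUES (10, 'comment');
INSERT INTO object VALUES (7, 42);
INSERT INTO user_activity VALUES (1, '2023-01-01 12:00:00', 10, 7);
""")

# One join recovers everything the wide denormalised row would have held.
row = cur.execute("""
SELECT ua.user_id, a.activity_type, o.owner_id
FROM user_activity ua
JOIN activity a ON a.activity_id = ua.activity_id
JOIN object   o ON o.object_id  = ua.object_id
""").fetchone()
print(row)  # (1, 'comment', 42)
```

With indexes on the FK columns, this join stays cheap even at scale, which is why the duplicate-data shortcut rarely pays off here.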
