
Compressing Redundant Text Data for SQL. A Fixed Dictionary?

Our application processes data according to a structure of rules configured by the client. During processing, a verbose "log" is generated at every step so the user can understand the reasons & logic behind the final processing result... and what limits (or rules or whatever) came into play.

How would you store this data in the DB?

I'd bet that 95%+ of the data is redundant from one log record to the next. I ran LZMA on the combined text of 100 records and the output was 2% of the original size.

The text is only retrieved for display by primary key; it's never queried for filtering or search purposes. The text averages around 25 KB per record.

If I compress the text of each record individually, I get down to roughly 10% of the original size... vs. 2% for the combined 100 records.
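
As a rough illustration of that measurement (a minimal sketch using Python's standard lzma module; the record texts would come from our actual table):

    import lzma

    def compare_compression(records):
        """records: a list of log-text strings, one per DB record."""
        original = sum(len(r.encode("utf-8")) for r in records)
        per_record = sum(len(lzma.compress(r.encode("utf-8"))) for r in records)
        combined = len(lzma.compress("".join(records).encode("utf-8")))
        print(f"per record: {per_record / original:.1%} of original size")
        print(f"combined:   {combined / original:.1%} of original size")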

Ideally, I'd like to use some sort of fixed dictionary generated from the huge amount of existing data.

We're using SQL 2005. I know that SQL 2008 has row- and page-level compression options... but getting our entire client base to upgrade isn't feasible at this time.

Thoughts? Thanks!

UPDATE: Here's what I've done. After a week of reading and experimenting, I wrote a procedure to generate an LZW-style string dictionary from the combined text of 1,000 records. I then prioritized the dictionary entries in a variety of ways, including:

- Expected overall savings (in bytes, through substitution)
- Expected savings, counting only dictionary entries present no more than once per record

I ran a simple substitution of the highest-priority X (between 100 and 1,000) dictionary entries on a sample record, then used an LZMA algorithm to compress the encoded output.
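
The substitution step itself is nothing fancy; roughly this (a sketch with made-up names; the escape byte is assumed not to occur in the log text):

    import lzma

    ESC = "\x01"  # escape byte, assumed unused in the log text

    def encode_with_dictionary(text, dictionary):
        """dictionary: list of entry strings, highest priority first."""
        for i, entry in enumerate(dictionary):
            # Replace each dictionary entry with a short escape code.
            text = text.replace(entry, f"{ESC}{i:04d}")
        return text

    def encoded_compressed_size(text, dictionary):
        encoded = encode_with_dictionary(text, dictionary)
        return len(lzma.compress(encoded.encode("utf-8")))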

By playing with different configurations for the dictionary... I found that, at best, I can improve the LZMA compression by approximately 1%. In most instances I introduce more entropy than I pull out, so the encoded, LZMA-compressed data is larger than the original data compressed with LZMA alone.

I've determined that there is more redundancy within the text of each record for LZMA to exploit than there is between rows.

So more than likely, I'll just LZMA all the text and call it a day.


The only way I can think of accomplishing this type of compression in SQL 2005 would be to create a custom framework with your own SQL CLR objects. This would be a pretty complicated solution but it may work for your purposes. The upgrade to SQL 2008 might be a lot easier and cost effective.

SQL CLR functions and/or triggers could be used to manage compression and decompression operations on the table in question... performance might be less than optimal, I don't know. You would also need some sort of dictionary management utilities, and some sort of scheduled maintenance job could be responsible for updating and optimizing the fixed dictionary regularly (if needed).
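
To make the round trip concrete, here is the shape of the logic those CLR functions would implement, sketched from the application side in Python rather than in the CLR itself (table, column, and DSN names are made up; the log column is assumed to be varbinary(max)):

    import lzma
    import pyodbc  # assumes an ODBC driver for your SQL Server is available

    conn = pyodbc.connect("DSN=AppDb")  # placeholder connection string

    def write_log(record_id, log_text):
        # Compress before the text ever hits the table.
        blob = lzma.compress(log_text.encode("utf-8"))
        conn.cursor().execute(
            "UPDATE ProcessingLog SET LogBlob = ? WHERE RecordId = ?",
            blob, record_id)
        conn.commit()

    def read_log(record_id):
        # Decompress only when the record is pulled up for display.
        row = conn.cursor().execute(
            "SELECT LogBlob FROM ProcessingLog WHERE RecordId = ?",
            record_id).fetchone()
        return lzma.decompress(row.LogBlob).decode("utf-8") if row else None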

Although this is not a direct solution to your problem, I do think you might find the following article on Code Project interesting -

Using CLR integration to compress BLOBs/CLOBs in SQL Server 2005

As you can see, the author of the article is using SQL CLR in a very clever way to solve a different compression problem on SQL 2005.


If it's mostly the same text, maybe a more relational approach could be taken, whereby you store the range of possible message outputs in the DB and have a table with the messageID and the different parameters that make each message unique?
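
Something along these lines, illustrated in Python with made-up message templates; each stored log line then becomes a (messageID, parameters) pair instead of the full text:

    message_templates = {
        1: "Rule {rule} applied: value {value} exceeds limit {limit}",
        2: "Step {step} skipped: condition {condition} not met",
    }

    def expand(message_id, params):
        """Rebuild the display text for one log line from its stored parts."""
        return message_templates[message_id].format(**params)

    print(expand(1, {"rule": "MaxWeight", "value": 120, "limit": 100}))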
