Tips for creating a very large database of hashes
The question: What solutions or tips would you suggest for dealing with a very large (multi-terabyte) database indexed on strong hashes, with high redundancy?
Some kind of inverted storage?
Is there something that could be done with Postgres?
I am ready to roll my own storage if needed.
(Hint: Must be open source, no Java, must run on Linux, must be disk-based, C/C++/Python preferred)
The details:
I need to create a very large database where each record has:
- some arbitrary metadata (some text fields), including a primary key
- one hash (128-bit, strong, MD5-like)
The volume of records is what I would qualify as quite large: several tens to hundreds of billions. There is significant redundancy of hashes across rows (over 40% of the records share their hash with at least one other record, and some hashes appear in 100K records).
The primary usage is to look up by hash, then retrieve the metadata. The secondary usage is to look up by primary key, then retrieve the metadata.
This is an analytics-type database, so the overall load is medium: mostly reads, with few writes, mostly batched.
The current approach is to use Postgres, with an index on the primary key and an index on the hash column. The table is loaded in batches with the hash index dropped.
All indexes are btrees. The index on the hash column is growing huge, as big as or bigger than the table itself. On a 120 GB table it takes about a day to recreate the index. Query performance is quite good, though.
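For concreteness, here is a minimal sketch of the single-table layout and load pattern described above; the table, column, and file names (records, pk, meta, hash, /path/to/batch.csv) are hypothetical placeholders, not the actual schema.

    -- Hypothetical single-table schema: a primary key, some text
    -- metadata, and the 128-bit hash stored as a 16-byte BYTEA.
    CREATE TABLE records (
        pk    BIGINT PRIMARY KEY,   -- btree index created implicitly
        meta  TEXT,
        hash  BYTEA NOT NULL        -- 16 bytes, strong MD5-like
    );

    -- Batch load with the hash index dropped, then rebuild it afterwards.
    DROP INDEX IF EXISTS records_hash_idx;
    COPY records (pk, meta, hash) FROM '/path/to/batch.csv' WITH (FORMAT csv);
    CREATE INDEX records_hash_idx ON records USING btree (hash);

    -- Primary usage: look up by hash, retrieve the metadata.
    SELECT meta FROM records WHERE hash = '\x00112233445566778899aabbccddeeff';
    -- Secondary usage: look up by primary key.
    SELECT meta FROM records WHERE pk = 42;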
The problem is that the projected size for the target database will be over 4 TB, based on tests with a smaller 400 GB dataset representing about 10% of the total target. Once loaded in Postgres, over 50% of the storage is unfortunately being used by the SQL index on the hash column.
This is way too big. And I feel that the redundancy in hashes is an opportunity for storing less.
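As a side note, one way to check how the storage splits between the table and the hash index is to ask Postgres directly (again using the hypothetical names from the sketch above):

    -- Compare the table size against the size of the hash index.
    SELECT pg_size_pretty(pg_relation_size('records'))          AS table_size,
           pg_size_pretty(pg_relation_size('records_hash_idx')) AS hash_index_size,
           pg_size_pretty(pg_total_relation_size('records'))    AS table_plus_indexes;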
Note also that while this describes the problem, there are a few of these tables that need to be created.
One answer: You could create one table with only an id and the hash, and a second table with your other data: its own key, the metadata, and a hashId referencing the first table. Doing so, you avoid writing the same hash up to 100K times in the table.
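A rough sketch of that normalization, assuming the hypothetical names hashes and records_meta: each distinct hash value is written once, and the per-row index shrinks to an 8-byte surrogate key.

    -- Each distinct hash value is stored exactly once.
    CREATE TABLE hashes (
        hash_id  BIGSERIAL PRIMARY KEY,
        hash     BYTEA NOT NULL UNIQUE   -- 16-byte MD5-like value; UNIQUE builds the btree
    );

    -- Metadata rows reference the hash through the surrogate key.
    CREATE TABLE records_meta (
        pk       BIGINT PRIMARY KEY,
        meta     TEXT,
        hash_id  BIGINT NOT NULL REFERENCES hashes (hash_id)
    );

    -- An index on the referencing column keeps hash lookups fast.
    CREATE INDEX records_meta_hash_id_idx ON records_meta (hash_id);

    -- Lookup by hash: resolve the hash once, then fetch all matching metadata.
    SELECT m.meta
    FROM hashes h
    JOIN records_meta m ON m.hash_id = h.hash_id
    WHERE h.hash = '\x00112233445566778899aabbccddeeff';

With this layout the index on the 16-byte hash covers only the distinct values rather than every row, while the index that grows with the row count is on the much smaller hash_id.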