
hdf5 and ndarray append / time-efficient approach for large data-sets

Background

I have k n-dimensional time-series, each represented as an m x (n+1) array holding float values (n columns plus one that holds the date).

Example:

k (around 4 million) time-series that look like

20100101    0.12    0.34    0.45    ...
20100105    0.45    0.43    0.21    ...
...         ...     ...     ... 

Each day, I want to append an additional row to a subset (< k) of the data sets. All data sets are stored in groups in one HDF5 file.

Question

What is the most time-efficient approach to append the rows to the data sets?

Input is a CSV file that looks like

key1, key2, key3, key4, date, value1, value2, ... 

whereby the date is unique within a particular file and can be ignored. I have around 4 million data sets. The issue is that I have to look up the key, load the complete numpy array, resize it, add the row, and store the array again. The total size of the HDF5 file is around 100 GB. Any idea how to speed this up? I think we can agree that SQLite or something similar won't work here - once all the data is in, an average data set will have over 1 million elements, times 4 million data sets.
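For reference, a minimal sketch of one way to append a row without reading the full array back, assuming the file is accessed through h5py and each series was created with an extendable first dimension (file name, key layout, and row contents are illustrative only):

    import h5py
    import numpy as np

    with h5py.File("series.h5", "a") as f:        # hypothetical file name
        grp = f.require_group("series")
        key = "key1_key2_key3_key4"               # hypothetical composite key
        new_row = np.array([[20100106, 0.11, 0.22, 0.33]])  # date + n values

        if key in grp:
            dset = grp[key]
            # grow the first dimension in place instead of rewriting the array
            dset.resize(dset.shape[0] + 1, axis=0)
            dset[-1, :] = new_row
        else:
            # maxshape=(None, ...) keeps the first dimension extendable later
            grp.create_dataset(key, data=new_row,
                               maxshape=(None, new_row.shape[1]),
                               chunks=True)

This only works if the datasets were created as chunked and resizable in the first place; it is a sketch of the append step, not a claim about how the existing 100 GB file is laid out.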

Thanks!


Have you looked at PyTables? It's a hierarchical database built on top of the HDF5 library.

It has several array types, but the "table" type sounds like it would work for your data format. It's basically an on-disk version of a NumPy record array, where each column can be a unique data type. Tables have an append method that can easily add additional rows.
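A minimal sketch of what that could look like, assuming one Table per series key; the group layout, column names, and types below are illustrative, not taken from the question:

    import tables as tb

    class SeriesRow(tb.IsDescription):
        date = tb.Int32Col()       # e.g. 20100105
        value1 = tb.Float64Col()
        value2 = tb.Float64Col()

    with tb.open_file("series.h5", mode="a") as h5:
        # create the table once under a per-key path (hypothetical layout)
        try:
            tbl = h5.get_node("/series/key1")
        except tb.NoSuchNodeError:
            tbl = h5.create_table("/series", "key1", SeriesRow,
                                  createparents=True)

        # append one row for the day
        row = tbl.row
        row["date"] = 20100106
        row["value1"] = 0.11
        row["value2"] = 0.22
        row.append()
        tbl.flush()

Because PyTables tables are chunked on disk, the append only writes the new chunk rather than rewriting the whole data set.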

As far as loading the data from CSV files, numpy.loadtxt is quite fast. Given a structured dtype, it will load the file into memory as a NumPy record array.
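For example, a sketch with a structured dtype matching the CSV layout from the question (the field widths and number of value columns are guesses):

    import numpy as np

    # key1..key4 as strings, date as an integer, values as floats
    dtype = [("key1", "S20"), ("key2", "S20"), ("key3", "S20"), ("key4", "S20"),
             ("date", "i4"), ("value1", "f8"), ("value2", "f8")]

    records = np.loadtxt("daily.csv", delimiter=",", dtype=dtype)

    # columns are then addressable by name, e.g. records["date"], records["value1"]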
