NumPy: load heterogeneous columns of data from list of strings
I'm working with array data stored in an ASCII file (similar to this thread). My file is at least 2M lines (158 MB) and is divided into multiple sections with different schemas. In my module that reads the format, I want to read the whole file via lines = open('myfile.txt', 'r').readlines()
so I can index the positions of each section, then read each section I need into NumPy data structures.
For example, one excerpt of a section is:
>>> print lines[5:10]
[' 1 0.1000 0.300E-03 0.000E+00 0.300E-03 0.000E+00 0.000E+00 0.300E-03 0.100E-03\n',
' 2 0.1000 0.120E-02 0.000E+00 0.120E-02 0.000E+00 0.000E+00 0.120E-02 0.100E-03\n',
' 3 0.1000 0.100E-02 0.000E+00 0.100E-02 0.000E+00 0.000E+00 0.100E-02 0.100E-03\n',
' 4 0.1000 0.110E-02 0.000E+00 0.110E-02 0.000E+00 0.000E+00 0.110E-02 0.100E-03\n',
' 5 0.1000 0.700E-03 0.000E+00 0.700E-03 0.000E+00 0.000E+00 0.700E-03 0.100E-03\n']
This section has the schema [int, float, float, float, float, float, float, float, float]
, and a later part has a simpler [int, float]
schema:
>>> print lines[20:25]
[' 1 0.00000E+00\n',
' 2 0.43927E-07\n',
' 3 0.44006E-07\n',
' 4 0.44020E-07\n',
' 5 0.44039E-07\n']
How can I quickly load different sections of the lines with NumPy? I see there is np.loadtxt
, but it requires a file handle and reads all the way to the end. I also see the np.from*
functions, but I'm not sure how to use them with my already-read lines
. Do I need to read the file twice?
Regarding the heterogeneous data types, I figure I can use a compound dtype
, like np.dtype([('col1', '<i2'), ('col2', 'f4'), ('col3', 'f4'), ('col4', 'f4'), ('col5', 'f4'), ('col6', 'f4'), ('col7', 'f4'), ('col8', 'f4'), ('col9', 'f4')])
, correct?
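Yes, a compound (structured) dtype is the right fit here. A minimal sketch using the sample rows above — the field names col1..col9 and the use of np.genfromtxt are just one way to spell it:

```python
import numpy as np

# Two of the sample rows from the question (each already ends in '\n').
lines = [' 1 0.1000 0.300E-03 0.000E+00 0.300E-03 0.000E+00 0.000E+00 0.300E-03 0.100E-03\n',
         ' 2 0.1000 0.120E-02 0.000E+00 0.120E-02 0.000E+00 0.000E+00 0.120E-02 0.100E-03\n']

# One int16 column followed by eight float32 columns.
dt = np.dtype([('col1', '<i2')] + [('col%d' % i, 'f4') for i in range(2, 10)])

# genfromtxt accepts any iterable of lines, not only a file handle.
arr = np.genfromtxt(lines, dtype=dt)
print(arr['col1'])  # the integer index column: [1 2]
```

Each field is then addressable by name (arr['col2'], etc.), which is convenient once the sections have different schemas.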
StringIO
can make file-like objects from strings, so you could do
from StringIO import StringIO  # io.StringIO on Python 3
m = np.loadtxt(StringIO(''.join(lines[5:10])))
(the lines already end in '\n', so a plain ''.join is enough).
Or even easier: np.loadtxt (and np.genfromtxt) accept any iterable of lines, not just a file handle, so you can pass the slice directly along with your compound dtype. (np.fromiter won't work here — it doesn't parse text lines.)
dt = np.dtype([('col1', '<i2')] + [('col%d' % i, 'f4') for i in range(2, 10)])
m = np.loadtxt(lines[5:10], dtype=dt)
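The same approach handles the two-column section later in the file. A minimal sketch, assuming the [int, float] sample lines from the question and hypothetical field names index/value:

```python
import numpy as np

# Sample rows from the simpler [int, float] section.
lines = [' 1 0.00000E+00\n',
         ' 2 0.43927E-07\n',
         ' 3 0.44006E-07\n']

dt = np.dtype([('index', '<i4'), ('value', 'f8')])  # hypothetical field names
section = np.loadtxt(lines, dtype=dt)  # a list of strings works here too
print(section['index'])  # [1 2 3]
```

Because you only ever pass slices of the already-read lines, the file is read from disk exactly once, which answers the "read the file twice" concern.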