Python - efficient representation of pixels and associated values
I'm using Python to work with large-ish (approx. 2000 x 2000) matrices, where each (I, J) point in the matrix represents a single pixel.
The matrices themselves are sparse (i.e. a substantial portion of them will have zero values), but when they are updated, the updates tend to be increment operations applied to a large number of adjacent pixels in a rectangular 'block', rather than to random pixels here or there (a property I do not currently use to my advantage).
I'm afraid I'm a bit new to matrix arithmetic, but I've looked into a number of possible solutions, including the various flavours of scipy sparse matrices. So far, co-ordinate (COO) matrices seem the most promising.
So, for instance, where I want to increment one block shape, I'd have to do something along the lines of:
>>> from scipy import sparse
>>> from numpy import array
>>> I = array([0,0,0,0])
>>> J = array([0,1,2,3])
>>> V = array([1,1,1,1])
>>> incr_matrix = sparse.coo_matrix((V,(I,J)),shape=(100,100))
>>> main_matrix += incr_matrix #where main_matrix was previously defined
In the future, I'd like to have a richer pixel-value representation in any case (tuples to represent RGB etc.), something that a numpy array doesn't support out of the box (or perhaps I need to use this).
Ultimately I'll have a number of these matrices that I would need to do simple arithmetic on, and I'd need the code to be as efficient as possible -- and distributable, so I'd need to be able to persist and exchange these objects in a small-ish representation without substantial penalties. I'm wondering if this is the right way to go, or should I be looking at rolling my own structures using dicts etc.?
The general rule is, get the code working first, then optimize if needed...
In this case, use a normal numpy 2000x2000 array, or 2000x2000x3 for RGB. This will be much easier and faster to work with, has only a small memory requirement, and has many other advantages; for example, you can use the standard image-processing tools, etc.
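As a rough illustration (the array name, dtype and block coordinates below are just assumptions for the sketch), incrementing a rectangular block of a dense array is a single slice operation:

import numpy as np

# dense 2000 x 2000 x 3 RGB accumulator (uint16 leaves headroom for repeated increments)
main = np.zeros((2000, 2000, 3), dtype=np.uint16)

# increment every pixel in the block rows 10..49, columns 100..199 by (1, 2, 3)
main[10:50, 100:200] += np.array([1, 2, 3], dtype=np.uint16)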
Then, if needed, "to persist and exchange these objects", you can just compress them using gzip, pytables, jpeg, or whatever, but there's no need to limit your data manipulation based on storage requirements.
This way you get both faster processing and better compression.
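For instance, a minimal sketch of persisting the dense array with numpy's built-in zlib compression (the file name and array name are just placeholders):

import numpy as np

main = np.zeros((2000, 2000, 3), dtype=np.uint16)

# write a compressed .npz archive; sparse/blocky data compresses well
np.savez_compressed('frame.npz', pixels=main)

# read it back
restored = np.load('frame.npz')['pixels']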
I would say, yes, this is the way to go -- definitely over building something out of dictionaries! When building a "vector" array, use a structured array, i.e. define your own dtype:
rgbtype = [('r','uint8'),('g','uint8'),('b','uint8')]
When incrementing your blocks, it will look something like this:
main_matrix['r'][blk_slice] += incr_matrix['r']
main_matrix['g'][blk_slice] += incr_matrix['g']
main_matrix['b'][blk_slice] += incr_matrix['b']
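For context, here is a self-contained sketch of the above; the shapes, the contents of blk_slice and the increment values are just assumptions for illustration:

import numpy as np

rgbtype = [('r', 'uint8'), ('g', 'uint8'), ('b', 'uint8')]

main_matrix = np.zeros((2000, 2000), dtype=rgbtype)

# a 40 x 100 block of increments: bump only the red channel here
incr_matrix = np.zeros((40, 100), dtype=rgbtype)
incr_matrix['r'] += 1

# the rectangular region of main_matrix the block maps onto
blk_slice = (slice(10, 50), slice(100, 200))

main_matrix['r'][blk_slice] += incr_matrix['r']
main_matrix['g'][blk_slice] += incr_matrix['g']
main_matrix['b'][blk_slice] += incr_matrix['b']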
Update:
It looks like you can't do matrix operations with a coo_matrix; they exist simply as a convenient way to populate a sparse matrix. You have to convert them to another (sparse) matrix type before doing the updates (documentation).
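For example, the update from the question might look like this (a sketch; CSR is just one of the formats that supports arithmetic, and main_matrix is assumed to already be a CSR matrix of the same shape):

from scipy import sparse
from numpy import array

I = array([0, 0, 0, 0])
J = array([0, 1, 2, 3])
V = array([1, 1, 1, 1])

# build in COO form, then convert to CSR before doing arithmetic
incr_matrix = sparse.coo_matrix((V, (I, J)), shape=(100, 100)).tocsr()

main_matrix = sparse.csr_matrix((100, 100))  # previously-defined accumulator
main_matrix = main_matrix + incr_matrix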
You might want to consider looking into a quadtree as an implementation. The quadtree structure is pretty efficient at storing sparse data, and has the added advantage that if you're working with structures composed of lots of blocks of similar data the representation can be very compact. I'm not sure if this will be particularly applicable to what you're doing, since I don't know what you mean by "working in blocks," but it's certainly worth checking out as an alternative sparse matrix implementation.
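To give a flavour of the idea, here is a very rough region-quadtree sketch over a power-of-two grid; it is purely illustrative (no merging of uniform regions, single-pixel updates only), not a drop-in replacement for a scipy sparse matrix:

class QuadNode:
    def __init__(self, x, y, size, value=0):
        self.x, self.y, self.size = x, y, size  # top-left corner and side length
        self.value = value        # meaningful only for leaves
        self.children = None      # None means this node is a (uniform) leaf

    def set(self, px, py, value):
        # set a single pixel, splitting uniform nodes on demand
        if self.size == 1:
            self.value = value
            return
        half = self.size // 2
        if self.children is None:
            self.children = [
                QuadNode(self.x + dx, self.y + dy, half, self.value)
                for dy in (0, half) for dx in (0, half)
            ]
        idx = (1 if px >= self.x + half else 0) + (2 if py >= self.y + half else 0)
        self.children[idx].set(px, py, value)

    def get(self, px, py):
        if self.children is None:
            return self.value
        half = self.size // 2
        idx = (1 if px >= self.x + half else 0) + (2 if py >= self.y + half else 0)
        return self.children[idx].get(px, py)

# usage: 2048 is the next power of two above 2000
root = QuadNode(0, 0, 2048)
root.set(150, 300, 7)
print(root.get(150, 300), root.get(0, 0))   # 7 0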