
vectorized approach to binning with numpy/scipy in Python

I am binning a 2-D array of (x, y) points in Python by their x values (bin edges given in "bins"), using np.digitize:

elements_to_bins = digitize(vals[:, 0], bins)  # bin by the x column

where "vals" is a 2d array, i.e.:

 vals = array([[1, v1], [2, v2], ...]). 

elements_to_bins just says what bin each element falls into. What I then want to do is get a list whose length is the number of bins in "bins", where each element holds the y-values of "vals" that fall into that bin. I do it this way right now:

points_by_bins = []
for curr_bin in range(min(elements_to_bins), max(elements_to_bins) + 1):
    curr_indx = where(elements_to_bins == curr_bin)[0]  # rows whose x falls in this bin
    curr_bin_vals = vals[curr_indx, 1]  # their y values
    points_by_bins.append(curr_bin_vals)

is there a more elegant/simpler way to do this? All I need is a list of lists of the y-values that fall into each bin.

thanks.


If I understand your question correctly:

from numpy import array, searchsorted

vals = array([[1, 10], [1, 11], [2, 20], [2, 21], [2, 22]])  # Example

(x, y) = vals.T  # Shortcut
bin_limits = range(min(x)+1, max(x)+2)  # Other limits could be chosen
points_by_bin = [ [] for _ in bin_limits ]  # Final result
for (bin_num, y_value) in zip(searchsorted(bin_limits, x, "right"), y):  # searchsorted() finds the correct bin number (as digitize() would)
    points_by_bin[bin_num].append(y_value)

print points_by_bin  # [[10, 11], [20, 21, 22]]

Numpy's fast array operation searchsorted() is used for maximum efficiency. Values are then added one by one, since the final result is not a rectangular array, so Numpy cannot help much with that part. This solution should be faster than multiple where() calls in a loop, which force Numpy to re-read the same array many times.
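As a side note (not part of the original answer), the bin numbers computed by searchsorted(bin_limits, x, "right") can also be obtained with digitize(), which the comment above alludes to; a minimal sketch, assuming the same example data:

import numpy as np

vals = np.array([[1, 10], [1, 11], [2, 20], [2, 21], [2, 22]])
x, y = vals.T
bin_limits = [2, 3]  # same upper limits as range(min(x)+1, max(x)+2) above

# For monotonically increasing bins, digitize(x, bins) is equivalent to
# searchsorted(bins, x, "right"):
print(np.digitize(x, bin_limits))               # [0 0 1 1 1]
print(np.searchsorted(bin_limits, x, "right"))  # [0 0 1 1 1]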


This will return a data structure analogous to IDL HISTOGRAM's Reverse_Indices:

ovec = np.argsort(vals)                                # "vals" here is the 1-D array of x values being binned
ivec = np.searchsorted(vals, bin_limits, sorter=ovec)  # position of each bin edge in that sort order

Then the list of elements that fall into bin #i is

ovec[ ivec[i] : ivec[i+1] ]

(my quick timing tests say this is 5x faster than EOL's algorithm, since it doesn't bother creating different-sized lists)
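For completeness, here is a runnable sketch of the reverse-indices idea applied to the question's 2-D vals; the example data and the extra under/overflow edges are assumptions added here, not part of the original answer:

import numpy as np

vals = np.array([[1, 10], [1, 11], [2, 20], [2, 21], [2, 22]])
x, y = vals.T
bin_limits = np.array([2, 3])  # bin edges for the x values

ovec = np.argsort(x)                                # order that sorts x
ivec = np.searchsorted(x, bin_limits, sorter=ovec)  # slice point for each bin edge

# Pad with 0 and len(x) so the under- and overflow bins are included;
# ovec[edges[i]:edges[i+1]] are then the row indices falling into bin i.
edges = np.concatenate(([0], ivec, [len(x)]))
points_by_bin = [y[ovec[edges[i]:edges[i + 1]]] for i in range(len(edges) - 1)]

print(points_by_bin)  # e.g. [array([10, 11]), array([20, 21, 22]), array([], dtype=int64)]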


Are the bin keys just integers, with no real binning needed, as in your example? Then you could just do this, without numpy:

from collections import defaultdict
bins = defaultdict(list)  # or [ [] ...] as in EOL

vals = [[1, 10], [1, 11], [2, 20], [2, 21], [2, 22]]  # nparray.tolist()
for nbin, val in vals:
    bins[nbin].append(val)

print "bins:", bins
# defaultdict(<type 'list'>, {1: [10, 11], 2: [20, 21, 22]})
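If the list-of-lists layout from the question is preferred over a dict, the result can be unpacked afterwards; a small self-contained sketch assuming the integer keys above:

from collections import defaultdict

bins = defaultdict(list)
for nbin, val in [[1, 10], [1, 11], [2, 20], [2, 21], [2, 22]]:
    bins[nbin].append(val)

# Order the bins by key; .get() avoids inserting empty defaults for missing keys.
lo, hi = min(bins), max(bins)
points_by_bin = [bins.get(k, []) for k in range(lo, hi + 1)]
print(points_by_bin)  # [[10, 11], [20, 21, 22]]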