Approximate median of an immutable array

I need to find the median value of an array of doubles (in Java) without modifying it (so selection is out) or allocating much new memory. I also don't need the exact median; within 10% is fine (so a value that splits the sorted array anywhere from 40%–60% is acceptable).

How can I achieve this efficiently?

Taking into account suggestions from rfreak, ILMTitan, and Peter, I wrote this code:

public static double median(double[] array) {
    final int smallArraySize = 5000;
    final int bigArraySize = 100000;
    if (array.length < smallArraySize + 2) { // small size, so can just sort
        double[] arr = array.clone();
        Arrays.sort(arr);
        return arr[arr.length / 2];
    } else if (array.length > bigArraySize) { // large size, don't want to make passes
        double[] arr = new double[smallArraySize + 1];
        int factor = array.length / arr.length;
        for (int i = 0; i < arr.length; i++)
            arr[i] = array[i * factor];
        return median(arr);
    } else { // average size, can sacrifice time for accuracy
        final int buckets = 1000;
        final double desiredPrecision = .005; // in percent
        final int maxNumberOfPasses = 10; 
        int[] histogram = new int[buckets + 1];
        int acceptableMin, acceptableMax;           
        double min, max, range, scale,
            medianMin = -Double.MAX_VALUE, medianMax = Double.MAX_VALUE;
        int sum, numbers, bin, below = 0, // count of values ranked below the current window
            neighborhood = (int) (array.length * 2 * desiredPrecision);
        for (int r = 0; r < maxNumberOfPasses; r ++) { // enter search for number around median
            max = -Double.MAX_VALUE; min = Double.MAX_VALUE; 
            numbers = 0;
            for (int i = 0; i < array.length; i ++)
                if (array[i] >= medianMin && array[i] <= medianMax) {
                    if (array[i] > max) max = array[i];
                    if (array[i] < min) min = array[i];
                    numbers ++;
                }
            if (min == max) return min;
            if (numbers <= neighborhood) return (medianMin + medianMax) / 2;
            // acceptable band of global ranks around the true median
            acceptableMin = (int) (array.length * (50d - desiredPrecision) / 100);
            acceptableMax = (int) (array.length * (50d + desiredPrecision) / 100);
            range = max - min;
            scale = range / buckets;
            for (int i = 0; i < array.length; i ++)
                if (array[i] >= medianMin && array[i] <= medianMax) // values outside the window would index out of bounds
                    histogram[(int) ((array[i] - min) / scale)] ++;
            sum = below; // ranks below the window count toward the global median rank
            for (bin = 0; bin <= buckets; bin ++) {
                sum += histogram[bin];
                if (sum > acceptableMin && sum < acceptableMax)
                    return ((.5d + bin) * scale) + min;
                if (sum > acceptableMax) break; // this bin straddles the median; narrow the window to it
            }
            below = sum - histogram[bin]; // everything before the straddling bin now lies below the window
            medianMin = (bin * scale) + min;
            medianMax = ((bin + 1) * scale) + min;
            for (int i = 0; i < histogram.length; i ++)
                histogram[i] = 0;
        }
        return .5d * medianMin + .5d * medianMax;
    }       
}

Here I take the size of the array into account. If it's small, I just sort a copy and take the true median. If it's very large, I sample it and take the median of the samples. Otherwise, I iteratively bin the values and check whether the median can be narrowed down to an acceptably small range.

I don't have any problems with this code. If someone sees something wrong with it, please let me know.

Thank you.


Assuming you mean median and not average. Also assuming you are working with a fairly large double[]; otherwise memory wouldn't be an issue and you could sort a copy and take the exact median. ...

With minimal additional memory overhead you could probably run an O(n) algorithm that would get in the ballpark. I'd try this and see how accurate it is.

Two passes.

On the first pass, find the min and max. Create a set of buckets that represent evenly spaced number ranges between the min and max. On the second pass, count how many numbers fall in each bucket. You should then be able to make a reasonable estimate of the median. Using 1000 buckets would only cost about 4 KB if you use an int[] to store the counts. The math should be fast.

The only question is accuracy, and I think you should be able to tune the number of buckets to get within the error range for your data sets.

I'm sure someone with a better math/stats background than I could provide a precise size to get the error range you are looking for.
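A minimal sketch of that two-pass idea, assuming evenly spaced buckets between the min and max (the method name and bucket count here are illustrative, not from the answer):

static double histogramMedian(double[] a) {
    final int buckets = 1000;
    // Pass 1: find the range.
    double min = Double.MAX_VALUE, max = -Double.MAX_VALUE;
    for (double v : a) {
        if (v < min) min = v;
        if (v > max) max = v;
    }
    if (min == max) return min; // all values identical
    double scale = (max - min) / buckets;
    // Pass 2: count how many values fall into each bucket.
    int[] counts = new int[buckets + 1]; // +1 so v == max maps to a valid index
    for (double v : a)
        counts[(int) ((v - min) / scale)]++;
    // Estimate: the midpoint of the bucket where the cumulative count
    // crosses the median rank n/2.
    int sum = 0, half = a.length / 2;
    for (int bin = 0; ; bin++) {
        sum += counts[bin];
        if (sum > half)
            return min + (bin + .5) * scale;
    }
}

The estimate lands within one bucket width of the true median in value terms; how that translates into rank accuracy depends on the distribution, which is exactly the tuning question raised above.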


Pick a small number of array elements at random, and find the median of those.
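A sketch of this suggestion (the sample size, sampling with replacement, and the use of java.util.Random are my own choices, not from the answer):

import java.util.Arrays;
import java.util.Random;

static double randomSampleMedian(double[] a, int sampleSize, Random rnd) {
    int n = Math.min(sampleSize, a.length);
    double[] sample = new double[n];
    for (int i = 0; i < n; i++)
        sample[i] = a[rnd.nextInt(a.length)]; // sample with replacement; a is never modified
    Arrays.sort(sample);                      // sorting the small sample is cheap
    return sample[n / 2];
}

With a few hundred samples, the sample median is very likely to fall near the middle of the full distribution, which fits the 40%–60% tolerance in the question.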


Following on from the OP's question about how to extract N values from a much larger array:

The following code shows how long it takes to find the median of a large array, and then how long it takes to find the median of a fixed-size selection of values. The fixed-size selection has a fixed cost, but becomes increasingly inaccurate as the size of the original array grows.

The following prints:

Avg time 17345 us. median=0.5009231700563378
Avg time 24 us. median=0.5146687617507585

The code:

double[] nums = new double[100 * 1000 + 1];
for (int i = 0; i < nums.length; i++) nums[i] = Math.random();

{
    int runs = 200;
    double median = 0;
    long start = System.nanoTime();
    for (int r = 0; r < runs; r++) {
        double[] arr = nums.clone();
        Arrays.sort(arr);
        median = arr[arr.length / 2];
    }
    long time = System.nanoTime() - start;
    System.out.println("Avg time " + time / 1000 / runs + " us. median=" + median);
}
{
    int runs = 20000;
    double median = 0;
    long start = System.nanoTime();
    for (int r = 0; r < runs; r++) {
        double[] arr = new double[301]; // fixed size to sample.
        int factor = nums.length / arr.length; // take every nth value.
        for (int i = 0; i < arr.length; i++)
            arr[i] = nums[i * factor];
        Arrays.sort(arr);
        median = arr[arr.length / 2];
    }
    long time = System.nanoTime() - start;
    System.out.println("Avg time " + time / 1000 / runs + " us. median=" + median);
}

To meet your requirement of not creating objects, I would put the fixed-size array in a ThreadLocal so there is no ongoing object creation. You can adjust the size of the array to suit how fast you want the function to be.
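For instance, one way to wire that up (the buffer size 301 matches the benchmark above; the field and method names are mine):

private static final ThreadLocal<double[]> SAMPLE_BUFFER =
        new ThreadLocal<double[]>() {
            @Override protected double[] initialValue() {
                return new double[301]; // fixed-size scratch array, one per thread
            }
        };

public static double sampledMedian(double[] nums) {
    double[] arr = SAMPLE_BUFFER.get();     // reused on every call; no new allocation
    int factor = nums.length / arr.length;  // take every nth value; assumes nums.length >= 301
    for (int i = 0; i < arr.length; i++)
        arr[i] = nums[i * factor];
    Arrays.sort(arr);                       // sorts the scratch buffer in place
    return arr[arr.length / 2];
}

Arrays.sort on a primitive array sorts in place, so steady-state calls allocate nothing.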


1) How much is a lot of new memory? Does it preclude a sorted copy of the data, or of references to the data?

2) Is your data repetitive (are there few distinct values)? If yes, then your answer to (1) is less likely to cause problems, because you may be able to do something with a lookup map and an array: e.g. a Map plus an array of short and a suitably tweaked comparison object (a simpler counting variant is sketched after this list).

3) The typical case for your "close to the mean" approximation is more likely to be O(n log n). Most sort algorithms only degrade to O(n^2) on pathological data. Additionally, the exact median is (typically) only O(n log n) anyway, assuming you can afford a sorted copy.

4) Random sampling (à la dan04) is more likely to be accurate than choosing values near the mean, unless your distribution is well behaved. For example, the Poisson and log-normal distributions both have medians that differ from their means.
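Regarding (2), here is a sketch of a simple counting variant of that idea (using a TreeMap rather than the short-array trick; purely illustrative):

import java.util.Map;
import java.util.TreeMap;

static double medianOfRepetitiveData(double[] a) {
    // Count occurrences of each distinct value in sorted order:
    // O(n log d) time and O(d) extra memory for d distinct values.
    TreeMap<Double, Integer> counts = new TreeMap<>();
    for (double v : a)
        counts.merge(v, 1, Integer::sum);
    // Walk cumulative counts up to the median rank n/2.
    long rank = 0, target = a.length / 2;
    for (Map.Entry<Double, Integer> e : counts.entrySet()) {
        rank += e.getValue();
        if (rank > target)
            return e.getKey(); // first value whose cumulative count passes the rank
    }
    throw new IllegalArgumentException("array must not be empty");
}

When d is small this gives the exact median with far less memory than a sorted copy.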
