
Retrieving the top 100 numbers from one hundred million numbers [duplicate]

This question already has answers here: Write a program to find 100 largest numbers out of an array of 1 billion numbers (33 answers) Closed 5 months ago.

One of my friends was asked the question

Retrieving the top 100 numbers from one hundred million numbers

in a recent job interview. Do you have any ideas for an efficient way to solve it?


Run them all through a min-heap of size 100: for each input number k, if k exceeds the heap's current minimum m, replace m with k and re-sift. Afterwards the heap holds the 100 largest inputs.

A search engine like Lucene can use this method, with refinements, to choose the most-relevant search answers.

Edit: I failed the interview -- I got the details wrong twice (after having done this before, in production). Here's code to check it; it's almost the same as Python's standard heapq.nlargest():

import heapq

def funnel(n, numbers):
    "Return the n largest items from numbers, in heap order."
    if n == 0: return []
    heap = numbers[:n]          # seed a min-heap with the first n items
    heapq.heapify(heap)
    for k in numbers[n:]:
        if heap[0] < k:         # k beats the smallest of the current top n
            heapq.heapreplace(heap, k)
    return heap

>>> funnel(4, [3,1,4,1,5,9,2,6,5,3,5,8])
[5, 8, 6, 9]


Ok, here is a really stupid answer, but it is a valid one:

  • Load all 100 million entries into an array
  • Call some quick sort implementation on it
  • Take the last 100 items (it sorts ascending), or the first 100 if you can sort descending.

Reasoning:

  • There is no context on the question, so efficiency can be argued - what IS efficient? Computer time or programmer time?
  • This method is implementable very fast.
  • 100 million numbers are just a couple of hundred MB, so every decent workstation can simply handle that.

It is an OK solution for some sort of one-time operation; it would suck running it many times per second. But then, we need more context - as mclientk also had with his simple SQL statement - assuming the 100 million numbers do not exist in memory is a feasible reading of the question, because they may come from a database, and most of the time will when talking about business-relevant numbers.

As such, the question is really hard to answer - efficiency first has to be defined.


Mergesort in batches of 100, then only keep the top 100.

Incidentally, you can scale this in all sorts of directions, including concurrently.
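
One way to read this, sketched in Python: reduce each batch to its own top 100, then reduce the survivors. The chunked-input shape and the use of heapq.nlargest as the per-batch step are my assumptions here; the same shape works for concurrent workers.

import heapq

def top100_batched(chunks):
    survivors = []
    for chunk in chunks:               # e.g. blocks from disk or worker results
        survivors.extend(heapq.nlargest(100, chunk))   # per-batch top 100
    return heapq.nlargest(100, survivors)              # overall top 100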


If the data is already in an array that you can modify, you could use a variant of Hoare's Select algorithm, which is (in turn) a variant of Quicksort.

The basic idea is pretty simple. In Quicksort, you partition the array into two pieces, one of items larger than the pivot, and the other of items smaller than the pivot. Then you recursively sort each partition.

In the Select algorithm, you do the partitioning step exactly as before -- but instead of recursively sorting both partitions, you look at which partition contains the elements you want, and recursively select ONLY in that partition. E.g., assuming your 100 million items partition nearly in half, the first several iterations you're going to look only at the upper partition.

Eventually, you're likely to reach a point where the portion you want "bridges" two partitions -- e.g., you have a partition of ~150 numbers, and when you partition that you end up with two pieces of ~75 apiece. At that point, only one minor detail changes: instead of rejecting one partition and continuing to work on only the other, you accept the upper partition of 75 items, and then continue looking for the top 25 in the lower partition.

If you were doing this in C++, you could do this with std::nth_element (which will normally be implemented approximately as described above). On average, this has linear complexity, which I believe is about as good as you can hope for (absent some preexisting order, I don't see any way to find the top N elements without looking at all the elements).
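
For comparison, here is a minimal quickselect sketch of the same partition-and-keep-one-side idea in Python; the random pivot choice and the function shape are my own, not a description of how std::nth_element is actually implemented.

import random

def top_n(a, n):
    # Partially partition `a` in place so its last n slots hold the
    # n largest items (in no particular order), then return them.
    k = len(a) - n                     # final index of the n-th largest
    lo, hi = 0, len(a) - 1
    while lo < hi:
        pivot = a[random.randint(lo, hi)]
        i, j = lo, hi
        while i <= j:                  # Hoare-style partition
            while a[i] < pivot:
                i += 1
            while a[j] > pivot:
                j -= 1
            if i <= j:
                a[i], a[j] = a[j], a[i]
                i, j = i + 1, j - 1
        # a[lo..j] <= pivot <= a[i..hi]; keep only the side holding k
        if k <= j:
            hi = j
        elif k >= i:
            lo = i
        else:
            break                      # everything between is == pivot
    return a[k:]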

If the data's not already in an array, and you're (for example) reading the data from a file, you usually want to use a heap. You basically read an item, insert it into the heap, and if the heap is larger than your target (100 items, in this case) you remove one and re-heapify.

What's probably not so obvious (but is actually true) is that you don't normally want to use a max-heap for this task. At first glance, it seems pretty obvious: if you want to get the maximum items you should use a max heap.

It's simpler, however, to think in terms of the items you're "removing" from the heap. A max heap lets you find the one largest item in the heap quickly. It is not, however, optimized for finding the smallest item in the heap.

In this case, we're interested primarily in the smallest item in the heap. In particular, when we read each item in from the file, we want to compare it to the smallest item in the heap. If (and only if) it's larger than the smallest item in the heap, we want to replace that smallest item currently in the heap with the new item. Since that's (by definition) larger than the existing item, we'll then need to sift that into the correct position in the heap.

But note: if the items in the file are randomly ordered, as we read through the file we fairly quickly reach a point at which most items we read from the file will be smaller than the smallest item in our heap. Since we have easy access to the smallest item in the heap, that comparison is quick and easy, and smaller items are never inserted into the heap at all.
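
A minimal streaming version of this in Python, assuming (hypothetically) a text file with one number per line:

import heapq

def top100_from_file(path):
    heap = []                          # min-heap of the 100 largest so far
    with open(path) as f:
        for line in f:
            x = int(line)
            if len(heap) < 100:
                heapq.heappush(heap, x)
            elif x > heap[0]:          # cheap check against the current min
                heapq.heapreplace(heap, x)
    return sorted(heap, reverse=True)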


By TOP 100, do you mean 100 largest? If so:

SELECT TOP 100 Number FROM RidiculouslyLargeTable ORDER BY Number DESC

Make sure you tell the interviewer that you assume the table is indexed properly.


There's no reason to sort the whole list. This should be doable in O(n) time, since the inner loop is bounded by the constant 100. In pseudocode:

List top = new List

for each num in entireList
    if top.Length = 0 then
        top.Add(num)
    else
        for i = 0 to top.Length - 1
            if num > top[i] then
                top.InsertBefore(num, i)
                if top.Length > 100 then
                    top.Remove(top.Length - 1)
                end if
                exit for
            else if i = top.Length - 1 and top.Length < 100 then
                top.Add(num)
                exit for
            end if
        next
    end if
next
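
A runnable Python version of the same idea, as a sketch (bisect keeps the list ascending, so the smallest kept number sits at index 0; entire_list stands in for the input sequence):

import bisect

top = []                       # ascending; top[0] is the smallest kept
for num in entire_list:
    if len(top) < 100:
        bisect.insort(top, num)
    elif num > top[0]:
        bisect.insort(top, num)
        top.pop(0)             # evict the old smallest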


@darius's answer can actually be improved by "pruning", i.e. deferring the heap-replace operation as long as possible.

Suppose we have a=1000 at the top of the (min-)heap.
It has children b and c,
and we know that b,c > 1000:

      a=1000
  +-----|-----+
 b>a         c>a

We now read the next number x=1035
Since x>a we should discard a.
Instead we store (x=1035, a=1000) at the root
We do not (yet) bubble down the new value of 1035 
Note that we still know that b,c > a, but possibly b,c < x
Now, we get the next number y
when y < a < x then obviously we can discard it (everything in the heap is greater than a)

when y>x>a then we replace x with y (the root now has (y, a=1000))
=> we saved log(m) steps here, since x will never have to bubble down

when x > y > a then we need to bubble y down recursively as required

Worst-case running time is still O(n log m),
but the average running time, I think, might be O(n log log m) or so.
In any case, it is obviously a faster implementation.


Heapify the array in O(n), then take out the top 100 elements.
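
A sketch of that in Python -- heapq builds min-heaps, so negating the values to simulate a max-heap is my own workaround here. Each pop is O(log n), so the whole thing is O(n + 100 log n).

import heapq

def top100_by_heapify(numbers):
    heap = [-x for x in numbers]       # negate so the min-heap acts as a max-heap
    heapq.heapify(heap)                # O(n)
    return [-heapq.heappop(heap) for _ in range(min(100, len(heap)))]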


I store the first 100 numbers in a max-heap of size 100.

  • At the last level, I keep track of the minimum number; each new incoming number is checked against that minimum to see whether it is a candidate for the top 100.

    -- If it is, I re-heapify, so I always have a max-heap of the current top 100.

    So its complexity is O(n log 100), which is effectively O(n).


#include <limits.h>
#include <string.h>

#define N 100000000                 /* one hundred million */

int numbers[N] = {...};             /* in practice, allocate on the heap */
int result[100];

void top100(void)
{
    for (int i = 0; i < 100; i++)
        result[i] = INT_MIN;        /* so negative inputs work too */

    for (int i = 0; i < N; i++)
    {
        for (int j = 0; j < 100; j++)
        {
            if (numbers[i] > result[j])
            {
                if (j < 99)
                {
                    /* shift result[j..98] down one slot; the regions
                       overlap, so memmove rather than memcpy */
                    memmove(result + j + 1, result + j,
                            (99 - j) * sizeof(int));
                }
                result[j] = numbers[i];
                break;
            }
        }
    }
}


First iteration:

Quicksort, take top 100. O(n log n). Simple, easy to code. Very obvious.

Better? We are working with numbers, so do a radix sort (linear time) and take the top 100 - see the sketch below. I would expect this is what the interviewer is looking for.
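
A sketch of the radix idea in Python, assuming non-negative 32-bit integers (LSD radix sort, one byte at a time):

def radix_top100(numbers):
    # LSD radix sort; stable bucketing per byte makes it correct
    for shift in (0, 8, 16, 24):
        buckets = [[] for _ in range(256)]
        for x in numbers:
            buckets[(x >> shift) & 0xFF].append(x)
        numbers = [x for b in buckets for x in b]
    return numbers[-100:]              # fully sorted ascending; the top 100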

Any other considerations? Well, 100 million numbers isn't a lot of memory, but if you want to minimize memory, you keep at most the 100 largest numbers encountered so far and just scan the input. What would be the best way?

Some have mentioned a heap, but a slightly better solution might be a doubly-linked list, where you keep a pointer to the minimum of the top 100 found so far. If you encounter a number a that is bigger than the current smallest in the list, compare it with the next element, and move it up from the minimum end until you find the right place for the new number. (This is basically just a specialized heap for the situation.) With some tuning (if the number is greater than the current minimum, compare it with the current maximum to see from which direction to walk the list to find the insertion point), this would be relatively effective, and would only take about 1.5 KB of memory.


Suppose mylist is a list of one hundred million numbers. We can sort the list and take the last hundred items from mylist:

mylist.sort()

mylist[-100:]

Second way:

import heapq

heapq.nlargest(100, mylist)
