How to implement a really efficient bitvector sort in Python?

This is actually an interesting problem from Programming Pearls: sorting 10-digit telephone numbers in limited memory with an efficient algorithm. You can find the whole story there.

What I am interested in is just how fast such an implementation could be in Python. I have done a naive implementation with the BitVector module. The code is as follows:

from BitVector import BitVector
import random
import time
import sys

def sort(input_li):
    # Baseline: Python's built-in Timsort.
    return sorted(input_li)

def vec_sort(input_li):
    # Bitmap sort: set one bit per value, then read the bits back in order.
    bv = BitVector(size=len(input_li))
    for i in input_li:
        bv[i] = 1

    res_li = []
    for i in range(len(bv)):
        if bv[i]:
            res_li.append(i)

    return res_li

if __name__ == "__main__":
    # The test data is a shuffled permutation of range(n), so every value
    # fits into a bit vector of length n.
    test_data = range(int(sys.argv[1]))
    print 'test_data size is:', sys.argv[1]
    random.shuffle(test_data)

    start = time.time()
    sort(test_data)
    elapsed = time.time() - start
    print "sort function takes " + str(elapsed)

    start = time.time()
    vec_sort(test_data)
    elapsed = time.time() - start
    print "vec_sort function takes " + str(elapsed)

I have tested array sizes from 100 to 10,000,000 on my MacBook (2GHz Intel Core 2 Duo, 2GB SDRAM); the results are as follows:


test_data size is: 1000
sort function takes 0.000274896621704
vec_sort function takes 0.00383687019348

test_data size is: 10000
sort function takes 0.00380706787109
vec_sort function takes 0.0371489524841

test_data size is: 100000
sort function takes 0.0520560741425
vec_sort function takes 0.374383926392

test_data size is: 1000000
sort function takes 0.867373943329
vec_sort function takes 3.80475401878

test_data size is: 10000000
sort function takes 12.9204008579
vec_sort function takes 38.8053860664

What disappoints me is that even when the test_data size is 10,000,000, the built-in sort is still faster than vec_sort. Is there any way to speed up the vec_sort function?


As Niki pointed out, you are comparing a very fast C routine with pure Python. Using psyco speeds it up a little for me, but you can really speed it up by using a bit-vector module written in C. I used bitarray, and with it the bit-sorting method overtakes the built-in sort at an array size of about 250,000 when psyco is enabled.

Here's the function that I used:

from bitarray import bitarray

def vec_sort2(input_li):
    bv = bitarray(len(input_li))
    bv.setall(0)
    for i in input_li:
        bv[i] = 1

    # Build the sorted list with a comprehension instead of repeated append().
    return [i for i in xrange(len(bv)) if bv[i]]

Notice also that I have used a list comprehension to construct the sorted list, which helps a bit. Using psyco and the above function alongside your functions, I get the following results:

test_data size is: 1000000
sort function takes 1.29699993134
vec_sort function takes 3.5150001049
vec_sort2 function takes 0.953999996185
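
For reference, "using psyco" here amounts to little more than the following (a minimal sketch; it assumes Python 2 with the psyco package installed):

import psyco
# JIT-compile every function in the module; psyco.bind(vec_sort2)
# would target just that one function instead.
psyco.full()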

As a side note, BitVector isn't especially well optimized, even for Python. Before I found bitarray, I made various tweaks to the module, and with my tweaked version the time for vec_sort drops by over a second for an array of this size. I haven't submitted my changes, though, because bitarray is just so much faster.


My Python isn't the best, but it looks like you have a bug in your code:

bv = BitVector( size = len(input_li) )

The size of your bit vector is the same as the size of your input array. You want the bit vector to be the size of your domain, 10^10. I'm not sure how Python's bit-vector modules handle out-of-range indices, but if the module automatically resizes the bit vector, then you are getting quadratic behavior.
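
As an illustration, here is a hypothetical variant (the name vec_sort_domain and the domain_size parameter are my own, reusing the bitarray approach from the other answer) that sizes the bit vector to the value domain rather than the input length:

from bitarray import bitarray

def vec_sort_domain(input_li, domain_size):
    # domain_size is the number of possible values, e.g. 10**10 for
    # 10-digit phone numbers (10**10 bits is roughly 1.2 GB).
    bv = bitarray(domain_size)
    bv.setall(0)
    for i in input_li:
        bv[i] = 1
    # Walk the whole domain and emit every value whose bit is set.
    return [i for i in xrange(domain_size) if bv[i]]

This is also what makes the running time O(n + 10^10) rather than O(n), as noted in the edit below.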

Additionally, I imagine Python's sort function is implemented in C, so it doesn't carry the overhead of a sort implemented purely in Python. However, that alone probably wouldn't make an O(n log n) algorithm run substantially faster than an O(n) one.

Edit: also, this sort only pays off on large data sets. Your algorithm runs in O(n + 10^10) time (based on your tests, I assume you know this), which is worse than O(n log n) for small inputs; for n = 1,000,000, n log2 n is only about 2x10^7, far below 10^10.
