MySQL LOAD DATA INFILE slows down after initial insert using raw sql in django

I'm using the following custom handler to do bulk inserts with raw SQL in Django, with a MySQLdb backend and InnoDB tables:

def handle_ttam_file_for(f, subject_pi):
    import datetime
    write_start = datetime.datetime.now()

    print "write to disk start: ", write_start
    destination = open('temp.ttam', 'wb+')
    for chunk in f.chunks():
        destination.write(chunk)
    destination.close()
    print "write to disk end", (datetime.datetime.now() - write_start)

    subject = Subject.objects.get(id=subject_pi)

    def my_custom_sql():
        from django.db import connection, transaction
        cursor = connection.cursor()

        statement = "DELETE FROM ttam_genotypeentry WHERE subject_id=%i;" % subject.pk
        del_start = datetime.datetime.now()
        print "delete start: ", del_start
        cursor.execute(statement)
        print "delete end", (datetime.datetime.now() - del_start)

        statement = "LOAD DATA LOCAL INFILE 'temp.ttam' INTO TABLE ttam_genotypeentry IGNORE 15 LINES (snp_id, @dummy1, @dummy2, genotype) SET subject_id=%i;" % subject.pk

        ins_start = datetime.datetime.now()
        print "insert start: ", ins_start
        cursor.execute(statement)
        print "insert end", (datetime.datetime.now() - ins_start)
        transaction.commit_unless_managed()

    my_custom_sql()

The uploaded file has 500k rows and is ~15 MB in size.

The load times seem to get progressively longer as files are added.

Insert times:
1st:    30m
2nd:    50m
3rd:    1h20m
4th:    1h30m
5th:    1h35m

I was wondering if it is normal for load times to get longer as files of constant size (same number of rows) are added, and if there is any way to improve the performance of these bulk inserts.


I found that the main issue with bulk inserting into my InnoDB table was a MySQL InnoDB setting I had overlooked.

innodb_buffer_pool_size defaults to 8M in my version of MySQL, and that was causing a huge slowdown as my table size grew.

innodb-performance-optimization-basics

choosing-innodb_buffer_pool_size

The recommended size according to the articles is 70 to 80 percent of memory if you are using a dedicated MySQL server. After increasing the buffer pool size, my inserts went from over an hour to less than 10 minutes, with no other changes.
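
As a quick way to confirm what the running server is actually using, here is a minimal sketch run from the Django side (the function name is mine, and the my.cnf numbers below are placeholders for whatever 70-80% of RAM works out to on your box). On MySQL versions of this vintage the setting is not dynamic, so it has to be changed in my.cnf and the server restarted.

def print_buffer_pool_size():
    # Ask the running server how big the InnoDB buffer pool currently is.
    from django.db import connection
    cursor = connection.cursor()
    cursor.execute("SHOW VARIABLES LIKE 'innodb_buffer_pool_size'")
    name, value = cursor.fetchone()
    print name, "=", int(value) / (1024 * 1024), "MB"

# The value cannot be changed from SQL on older MySQL versions; it has to be
# set in my.cnf under [mysqld] and the server restarted, for example:
#
#   [mysqld]
#   innodb_buffer_pool_size = 2G    # roughly 70-80% of RAM on a dedicated DB box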

Another change I was able to make was getting rid of the LOCAL argument in the LOAD DATA statement (thanks @f00). My problem before was that I kept getting 'file not found' or 'cannot get stat' errors when trying to have MySQL access the file Django had uploaded.

It turns out this is related to using Ubuntu, where AppArmor confines mysqld, and this bug:

  1. Pick a directory from which mysqld should be allowed to load files. Perhaps somewhere writable only by your DBA account and readable only by members of group mysql?

  2. sudo aa-complain /usr/sbin/mysqld

  3. Try to load a file from your designated loading directory: 'load data infile '/var/opt/mysql-load/import.csv' into table ...'

  4. sudo aa-logprof
     aa-logprof will identify the access violation triggered by the 'load data infile ...' query, and interactively walk you through allowing access in the future. You probably want to choose Glob from the menu, so that you end up with read access to '/var/opt/mysql-load/*'. Once you have selected the right (glob) pattern, choose Allow from the menu to finish up. (N.B. Do not enable the repository when prompted to do so the first time you run aa-logprof, unless you really understand the whole AppArmor process.)

  5. sudo aa-enforce /usr/sbin/mysqld

  6. Try to load your file again. It should work this time.
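
Putting the two changes together, here is a rough sketch of how the original handler's load step could be adapted once LOCAL is dropped: the upload is written into the directory the AppArmor profile now allows, and mysqld reads it directly from there. The function name, the LOAD_DIR constant, and the file naming are illustrative, not the exact code I ended up with; also note that without LOCAL the MySQL account needs the FILE privilege, and the file has to be readable by the mysql system user.

import os

LOAD_DIR = '/var/opt/mysql-load'  # the directory allowed in the AppArmor profile above

def load_ttam_file(f, subject):
    from django.db import connection, transaction

    # Write the uploaded chunks somewhere mysqld itself is allowed to read.
    path = os.path.join(LOAD_DIR, 'subject_%i.ttam' % subject.pk)
    destination = open(path, 'wb+')
    for chunk in f.chunks():
        destination.write(chunk)
    destination.close()
    os.chmod(path, 0644)  # make sure the mysql user can read it

    # No LOCAL here: the server opens the file from its own filesystem,
    # which is exactly the access the AppArmor steps above grant.
    statement = ("LOAD DATA INFILE '%s' INTO TABLE ttam_genotypeentry "
                 "IGNORE 15 LINES (snp_id, @dummy1, @dummy2, genotype) "
                 "SET subject_id=%i;" % (path, subject.pk))

    cursor = connection.cursor()
    cursor.execute(statement)
    transaction.commit_unless_managed()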
