Postgres performance tips: loading billions of rows

I am in the middle of a project that involves extracting numerous pieces of information from 70GB worth of XML documents and loading them into a relational database (in this case Postgres). I am currently using Python scripts with psycopg2 to do the inserts and so on. I have found that as the number of rows in some of the tables increases (the largest is at around 5 million rows), the speed of the script (inserts) slows to a crawl. What once took a couple of minutes now takes about an hour.

What can I do to speed this up? Was I wrong to use Python and psycopg2 for this task? Is there anything I can do to the database that might speed up this process? I get the feeling I am going about this in entirely the wrong way.


Considering the process was fairly efficient before and only slowed down once the dataset grew, my guess is that it's the indexes. You could try dropping the indexes on the table before the import and recreating them after it's done. That should speed things up.
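
A minimal sketch of that with psycopg2, assuming a hypothetical table big_table with a single index idx_big_table_foo (adapt the names to your schema):

import psycopg2

# Minimal sketch: drop the index before the bulk load, recreate it afterwards.
# Table, column, and index names are hypothetical placeholders.
conn = psycopg2.connect("dbname=mydb user=me")
cur = conn.cursor()

cur.execute("DROP INDEX IF EXISTS idx_big_table_foo;")
conn.commit()

# ... run your existing INSERT/COPY loading loop here ...

cur.execute("CREATE INDEX idx_big_table_foo ON big_table (foo);")
conn.commit()
cur.close()
conn.close()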


What are the settings for wal_buffers and checkpoint_segments? For large transactions, you have to tweak some settings. Check the manual.

Consider the book PostgreSQL 9.0 High Performance as well; there is much more to tweak than just the database configuration to get high performance.


I'd try to use COPY instead of inserts. This is what backup tools use for fast loading.
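
A minimal sketch of using COPY from psycopg2 (table and column names are hypothetical, and the buffer below stands in for rows you have already parsed out of the XML):

import io
import psycopg2

# Minimal sketch: stream rows through COPY instead of row-by-row INSERTs.
# Table and column names are hypothetical placeholders.
conn = psycopg2.connect("dbname=mydb user=me")
cur = conn.cursor()

rows = [("1", "foo"), ("2", "bar")]   # stand-in for data parsed from the XML
buf = io.StringIO("".join("\t".join(r) + "\n" for r in rows))

cur.copy_from(buf, "big_table", columns=("id", "name"))
conn.commit()
cur.close()
conn.close()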

Check that all foreign keys from this table have a corresponding index on the target table. Or better, drop them temporarily before copying and recreate them afterwards.

Increase checkpoint_segments from the default of 3 (which means 3 * 16MB = 48MB) to a much higher number; try, for example, 32 (512MB). Make sure you have enough disk space for this much additional WAL data.
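
Roughly what that might look like in postgresql.conf (illustrative values only, not universal advice; some of these settings require a server restart, and checkpoint_segments was later replaced by max_wal_size in PostgreSQL 9.5):

# postgresql.conf - illustrative values for a bulk-load window
checkpoint_segments = 32    # default 3; 32 * 16MB = 512MB of WAL segments
wal_buffers = 16MB          # auto-tuned (-1) on newer versions; worth raising manually on 8.x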

If you can afford to recreate or restore your database cluster from scratch in case of a system crash or power failure, then you can start Postgres with the "-F" option, which disables fsync so writes rely on the OS write cache.


Take a look at http://pgbulkload.projects.postgresql.org/


There is a list of hints on this topic in the Populating a Database section of the documentation. You might speed up general performance using the hints in Tuning Your PostgreSQL Server as well.

The overhead of checking foreign keys might be growing as the table size increases, which is made worse because you're loading a single record at a time. If you're loading 70GB worth of data, it will be far faster to drop the foreign keys during the load, then rebuild them once the import is done. This is particularly true if you're using single INSERT statements. Switching to COPY instead is not a guaranteed improvement either, due to how the pending trigger queue is managed; the issues there are discussed in that first documentation link.

From the psql prompt, you can find the name of the constraint enforcing your foreign key and then drop it using that name like this:

\d tablename
ALTER TABLE tablename DROP CONSTRAINT constraint_name;

When you're done with loading, you can put it back using something like:

ALTER TABLE tablename ADD CONSTRAINT constraint_name FOREIGN KEY (other_table) REFERENCES other_table (join_column);

One useful trick to find out the exact syntax to use for the restore is to do pg_dump --schema-only on your database. The dump from that will show you how to recreate the structure you have right now.


I'd look at the rollback logs. They've got to be getting pretty big if you're doing this in one transaction.

If that's the case, perhaps you can try committing a smaller transaction batch size. Chunk it into smaller blocks of records (1K, 10K, 100K, etc.) and see if that helps.
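
A minimal sketch of chunked commits with psycopg2 (table, columns, and the rows list are hypothetical placeholders, and the batch size is just a starting point to experiment with):

import psycopg2

# Minimal sketch: commit every N rows instead of once at the very end.
BATCH_SIZE = 10000
conn = psycopg2.connect("dbname=mydb user=me")
cur = conn.cursor()

rows = [(1, "foo"), (2, "bar")]   # stand-in for rows parsed from the XML
for i, row in enumerate(rows, start=1):
    cur.execute("INSERT INTO big_table (id, name) VALUES (%s, %s)", row)
    if i % BATCH_SIZE == 0:
        conn.commit()             # keep each transaction to a manageable size

conn.commit()                     # flush the final partial batch
cur.close()
conn.close()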


First, 5 million rows is nothing; insert performance should not change much whether the table holds 100k or 1 million rows. One or two indexes won't slow it down that much (if the fill factor is set to 70-90, considering each major import is about 1/10 of the table).

Python with psycopg2 is quite fast. A small tip: you could use the database extension XML2 to read and work with the data.

There is a small example at https://dba.stackexchange.com/questions/8172/sql-to-read-xml-from-file-into-postgresql-database

duffymo is right: try to commit in chunks of around 10,000 inserts (committing only at the end, or after every single insert, is quite expensive). A few more things to check:

- autovacuum might bloat things if you do a lot of deletes and updates; you can turn it off temporarily for certain tables at the start.
- Set work_mem and maintenance_work_mem according to your server's available resources (see the sketch after this list).
- For inserts, increase wal_buffers (on 9.0 and higher it is set automatically by default, -1); if you use a version 8.x PostgreSQL, you should increase it manually.
- You could also turn fsync off and test wal_sync_method (be cautious: changing these may make your database crash-unsafe if a sudden power failure or hardware crash occurs).
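
A minimal sketch of the session- and table-level knobs from psycopg2 (names and values are hypothetical; wal_buffers and fsync are server-level settings that cannot be changed from a session, so they are not shown here):

import psycopg2

# Minimal sketch: per-session and per-table tuning around a big load.
conn = psycopg2.connect("dbname=mydb user=me")
cur = conn.cursor()

# work_mem / maintenance_work_mem can be raised for just this session.
cur.execute("SET work_mem = '256MB';")
cur.execute("SET maintenance_work_mem = '512MB';")

# Keep autovacuum from fighting the load on this one table (hypothetical name).
cur.execute("ALTER TABLE big_table SET (autovacuum_enabled = false);")
conn.commit()

# ... run the load here ...

cur.execute("ALTER TABLE big_table SET (autovacuum_enabled = true);")
conn.commit()
cur.close()
conn.close()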

Try to drop foreign keys, and disable triggers or set conditions so that the triggers do not run or skip execution.

Use prepared statements for the inserts, and cast the variables.
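
A minimal sketch of server-side prepared statements from psycopg2 (table, columns, and values are hypothetical placeholders):

import psycopg2

# Minimal sketch: PREPARE the INSERT once, then EXECUTE it per row.
conn = psycopg2.connect("dbname=mydb user=me")
cur = conn.cursor()

cur.execute(
    "PREPARE bulk_insert (int, text) AS "
    "INSERT INTO big_table (id, name) VALUES ($1, $2);"
)

for row in [(1, "foo"), (2, "bar")]:   # stand-in for parsed XML rows
    cur.execute("EXECUTE bulk_insert (%s, %s);", row)

conn.commit()
cur.execute("DEALLOCATE bulk_insert;")
cur.close()
conn.close()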

You could try to insert the data into an unlogged table to hold it temporarily.
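
A minimal sketch of that staging approach (UNLOGGED tables need PostgreSQL 9.1 or later; table names are hypothetical placeholders):

import psycopg2

# Minimal sketch: load into an UNLOGGED staging table (no WAL overhead),
# then move the rows into the real table in one statement.
conn = psycopg2.connect("dbname=mydb user=me")
cur = conn.cursor()

cur.execute("CREATE UNLOGGED TABLE staging_big_table (LIKE big_table);")
conn.commit()

# ... bulk-load into staging_big_table here (COPY or batched INSERTs) ...

cur.execute("INSERT INTO big_table SELECT * FROM staging_big_table;")
cur.execute("DROP TABLE staging_big_table;")
conn.commit()
cur.close()
conn.close()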

Do the inserts have WHERE conditions, or values coming from a sub-query, functions, or the like?
