psql dumping a table and memory usage

Hello, I would like to dump a Postgres table to CSV. The database is local to the (Linux) machine on which I am working.

psql dev -U myusername -c "select * from control.mytable;" > mydata.csv

mytable is pretty large, around 120 million rows. I am worried that this will take up way too much memory. When I monitor with top, the memory usage seems to keep increasing, so I kill the process when it reaches a certain point. Does anyone know whether this really consumes that much memory? I know the usage reported by top can be somewhat misleading to the uninitiated, such as myself. Are there any alternatives that won't use so much memory?


Why don't you use COPY to create the CSV? COPY does a much better job.


If you do it like this, the entire result set will be built up in the client. So yes, it really is using a lot of memory. If you want to alleviate that, use a cursor to fetch the result in batches. Or use COPY, which already has a CSV mode built in.
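For illustration, a cursor-based dump in psql could look like the following sketch; the cursor name and batch size here are arbitrary:

BEGIN;
DECLARE mycur CURSOR FOR SELECT * FROM control.mytable;
FETCH FORWARD 10000 FROM mycur;  -- repeat this FETCH until it returns no rows
CLOSE mycur;
COMMIT;

psql can also do the batching for you: if you set its FETCH_COUNT variable (e.g. psql --variable=FETCH_COUNT=10000 ...), it retrieves query results in groups of that many rows instead of buffering the whole result set.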


If you use COPY, you have to write to STDOUT unless you run it as a superuser (such as the postgres user), in which case COPY can write directly to a server-side file.

psql -U myusername dev -c "COPY (SELECT * FROM control.mytable) TO STDOUT WITH CSV HEADER;" > mydata.csv
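
If you would rather avoid superuser access and shell redirection entirely, psql's \copy meta-command runs the same COPY on the server but writes the file on the client side, with your own permissions. A sketch, assuming the same table and output filename:

\copy (SELECT * FROM control.mytable) TO 'mydata.csv' WITH CSV HEADER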