Ubuntu 11.04 + PHP + PostgreSQL: enhance performance
I'm on this machine:
Intel Core 2 Duo E8400 @ 3 GHz, 4 GB DDR2 RAM
PHP 5.3.6, PostgreSQL 9.1
I'm running a PHP script that takes about 5 minutes on a Mac with similar specs. This PHP script essentially recreates a database by importing some data into it.
On this computer it runs in more than 20 minutes.
The weird thing is the CPU usage of both PHP and PostgreSQL:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
8408 postgres 20 0 2188m 44m 40m D 4 1.1 0:20.71 postgres
8407 gianps 20 0 380m 225m 6620 S 2 5.7 0:11.78 php
top - 16:08:32 up 3:35, 3 users, load average: 1.26, 1.15, 0.80
Tasks: 187 total, 1 running, 185 sleeping, 0 stopped, 1 zombie
Cpu(s): 4.8%us, 2.7%sy, 0.2%ni, 87.0%id, 5.1%wa, 0.0%hi, 0.2%si, 0.0%st
Mem: 4056572k total, 2541972k used, 1514600k free, 117772k buffers
Swap: 3905532k total, 0k used, 3905532k free, 902048k cached
I set up PHP (both CLI and Apache) to use as much RAM as it needs (memory_limit = -1) and tuned Postgres to use:
shared_buffers = 2GB
effective_cache_size = 3072MB
Any suggestions to let this script use more RAM and CPU and run faster?
thanks
Update: after some investigation I found that turning off synchronous commit (in this situation) makes my script 10x faster:
set synchronous_commit to off;
Since it is not safe to make this option the default, I just switch it off when needed. To understand what synchronous commit does, see the documentation.
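A minimal sketch of how the setting can be scoped to the import connection only, assuming a pg_connect-based script (the connection string and surrounding structure are illustrative):

<?php
// Connect to the target database (connection parameters are made up).
$db = pg_connect('host=localhost dbname=import_target user=gianps');

// Disable synchronous commit for this session only; the server-wide
// default stays untouched, so other connections keep full durability.
pg_query($db, 'SET synchronous_commit TO off');

// ... run the import here ...

// Optionally restore the default before reusing the connection.
pg_query($db, 'SET synchronous_commit TO on');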
Importing data requires writing it to disk, so the duration of the process is likely determined by the performance of the local storage system. If your Mac has a flashy SSD and the other box an IDE disk, the latter may easily be a lot slower. Use iostat to visualize disk throughput on both systems.
Another big performance factor for inserting/writing data is commit size: try inserting a lot of rows at once and issuing a commit only every few thousand rows. Or use the even faster "COPY FROM STDIN" method (which is specific to PostgreSQL).
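A rough sketch of both approaches in PHP, assuming a pg_connect connection and an $rows array of already-parsed import data (table and column names are made up for illustration):

<?php
$db = pg_connect('dbname=import_target');   // illustrative connection string

// Variant 1: explicit transactions, committing only every few thousand
// rows instead of once per INSERT.
$batchSize = 5000;
$i = 0;
pg_query($db, 'BEGIN');
foreach ($rows as $row) {                    // $rows: your parsed import data (assumption)
    pg_query_params($db,
        'INSERT INTO items (id, name) VALUES ($1, $2)',
        array($row['id'], $row['name']));
    if (++$i % $batchSize === 0) {
        pg_query($db, 'COMMIT');
        pg_query($db, 'BEGIN');
    }
}
pg_query($db, 'COMMIT');

// Variant 2: COPY, usually the fastest bulk-load path.
// pg_copy_from() expects an array of tab-delimited lines.
$lines = array();
foreach ($rows as $row) {
    $lines[] = $row['id'] . "\t" . $row['name'] . "\n";
}
pg_copy_from($db, 'items', $lines);

COPY avoids the per-statement overhead of individual INSERTs, which is why the PostgreSQL documentation recommends it for bulk loads.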