Concurrent queries in Postgres
I'll try to briefly describe my situation so the purpose of this question is clear. I have a large multi-threaded application written in Java. Each thread contains fairly involved logic to load, validate and parse a file and then insert the data into the database. The insert/update stage also runs inside the thread. Each thread runs several stored procedures that update one table; there are several of these stored procedures per thread.
This database is used by a web application at the same time, so, as you can guess, I have a problem with deadlocks. I have tried every locking model, but none of them helped. During some time frames, SELECTs issued from the web side run for a long time.
So here is my question: does Postgres have a mechanism to synchronize stored procedures or queries? I know this should be done with the right locking model, but I have had no success with that. In MS SQL Server I solve this by adding "WITH NOLOCK" to the SELECT statements. I could resolve it with synchronization in the application code, but that logic would be really complicated.
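For context, the SQL Server pattern I mean is roughly the following (shown against one of my tables purely as an illustration; the hint lets the SELECT read without taking shared locks, at the cost of dirty reads):

SELECT id, parent_category_id
FROM category WITH (NOLOCK)   -- T-SQL hint: read without shared locks (allows dirty reads)
WHERE parent_category_id = 0;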
As requested, I'm posting an example of a query that is slow while the update process is running.
SELECT c.id AS id, cn.name AS name, c.parent_category_id AS parent_category_id,
       CASE WHEN EXISTS (SELECT p.id
                         FROM category AS subcat
                         INNER JOIN product AS p ON subcat.id = p.category_id
                         WHERE subcat.parent_category_id = c.id
                         LIMIT 1)
            THEN true ELSE false END AS autoupdate
FROM category AS c
INNER JOIN category_name AS cn ON c.id = cn.category_id
LEFT JOIN category AS subcat ON subcat.parent_category_id = c.id
WHERE c.parent_category_id = 0
  AND cn.langid = 1
  AND c.id != 1
  AND EXISTS (SELECT p.id
              FROM product AS p
              WHERE p.category_id = c.id OR p.category_id = subcat.id
              LIMIT 1)
GROUP BY c.sort, c.id, cn.name, c.parent_category_id
ORDER BY c.sort, cn.name
And its EXPLAIN output:
Group  (cost=947.74..987.96 rows=15 width=40)
  ->  Sort  (cost=947.74..947.75 rows=15 width=40)
        Sort Key: c.sort, cn.name, c.id, c.parent_category_id
        ->  Merge Left Join  (cost=125.84..947.62 rows=15 width=40)
              Merge Cond: (c.id = subcat.parent_category_id)
              Filter: (SubPlan 2)
              ->  Sort  (cost=64.46..64.46 rows=3 width=40)
                    Sort Key: c.id
                    ->  Nested Loop  (cost=0.00..64.45 rows=3 width=40)
                          ->  Seq Scan on category c  (cost=0.00..41.69 rows=3 width=18)
                                Filter: ((id <> 1) AND (parent_category_id = 0))
                          ->  Index Scan using category_name_category_id_langid on category_name cn  (cost=0.00..7.56 rows=1 width=30)
                                Index Cond: ((cn.category_id = c.id) AND (cn.langid = 1))
              ->  Sort  (cost=61.38..62.46 rows=1084 width=16)
                    Sort Key: subcat.parent_category_id
                    ->  Seq Scan on category subcat  (cost=0.00..39.52 rows=1084 width=16)
              SubPlan 2
                ->  Limit  (cost=0.00..27.29 rows=1 width=8)
                      ->  Seq Scan on product p  (cost=0.00..36459.98 rows=1336 width=8)
                            Filter: ((category_id = $0) OR (category_id = $1))
              SubPlan 1
                ->  Limit  (cost=0.00..2.68 rows=1 width=8)
                      ->  Nested Loop  (cost=0.00..17889.28 rows=6684 width=8)
                            ->  Seq Scan on category subcat  (cost=0.00..40.60 rows=10 width=8)
                                  Filter: (parent_category_id = $0)
                            ->  Index Scan using product_category_id on product p  (cost=0.00..1764.16 rows=668 width=16)
                                  Index Cond: (p.category_id = subcat.id)
Thanks! Best regards Artem
I think the term you're looking for is "isolation level". I'd imagine that some parts of your application might tolerate dirty reads, non-repeatable reads, or phantom reads. But in current versions of PostgreSQL, writers don't block readers, and readers don't block writers.
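For illustration, this is roughly what choosing an isolation level per transaction looks like in PostgreSQL; I'm using a table name from your query, and which level actually fits your workload is something only you can decide:

BEGIN;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;  -- or SERIALIZABLE; READ COMMITTED is the default
-- Note: PostgreSQL accepts READ UNCOMMITTED but treats it as READ COMMITTED,
-- so you never actually get dirty reads.
SELECT c.id, c.parent_category_id
FROM category AS c
WHERE c.parent_category_id = 0;
COMMIT;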
So my question is this: How do you know the delay is caused by a lock, and not by, say, disk I/O, connection load, other processes (processes not related to the dbms, like a long-running cron job), or something else?
You don't need "WITH NOLOCK" in PostgreSQL, because readers are never blocked by writers. The only way to block a SELECT on a table is to lock the table in exclusive mode manually, which is not something normal DML statements do. Only a statement like ALTER TABLE acquires such a lock.
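To illustrate the difference, here is a minimal sketch using a table from your question (not something you would normally do in production):

-- This is the kind of lock that *does* block concurrent SELECTs:
BEGIN;
LOCK TABLE category IN ACCESS EXCLUSIVE MODE;
-- ... maintenance work ...
COMMIT;  -- lock released here

-- An ordinary UPDATE only takes a ROW EXCLUSIVE lock on the table,
-- which does not conflict with concurrent SELECTs (the id value is made up):
BEGIN;
UPDATE category SET parent_category_id = 0 WHERE id = 42;
COMMIT;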
But I'm not sure I understood your question.
Do you already have a problem with deadlocks? Or do you think you will have them, without testing?