Cascade delete performance drop on bigger datasets: can this be caused by lack of indexing?
I'm writing some code that has to cascade delete records in a certain database, and I noticed a drop in performance as the database grows. When I just fill the database there doesn't seem to be a big difference in performance between the start of the filling and the end, yet when I do a cascade delete, performance goes down as the database gets bigger. I'm assuming the cascade needs a lot of joins to find all the related records in the other tables, which causes it to slow down on bigger datasets. But when I just add a record, wouldn't the database also have to check for existing primary keys and other unique constraints, and wouldn't that also be slower on bigger datasets? Or is that check so fast compared to the delete process that it's hard to notice a performance drop while filling the database? Or are the cascades just slow because I didn't specifically index the tables that it cascades to?
So secondly, would indexing the tables it cascades to speed up the cascading if those tables already have a generated id as primary key? In a more general sense: are primary keys automatically indexed?
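For reference, this is roughly the kind of mapping I mean (a minimal JPA/Hibernate sketch; the entity, table, and column names are made up, and the explicit index on the foreign-key column is what I'm wondering about):

```java
import java.util.List;
import javax.persistence.*;

// Hypothetical parent entity whose removal cascades to its children.
@Entity
@Table(name = "parent")
class Parent {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;                            // generated primary key

    @OneToMany(mappedBy = "parent", cascade = CascadeType.REMOVE)
    private List<Child> children;
}

// Hypothetical child entity: its own primary key is indexed automatically,
// but the foreign-key column the cascade has to search on often is not,
// so an index on it is declared explicitly here.
@Entity
@Table(name = "child",
       indexes = @Index(name = "idx_child_parent_id", columnList = "parent_id"))
class Child {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;                            // generated primary key

    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "parent_id")             // foreign key back to the parent being deleted
    private Parent parent;
}
```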
I'm assuming that it would need to make a lot of joins for the cascade to find all the related records in other tables, which causes it to slow down on bigger datasets.
Don't assume. Turn up Hibernate's logging (specifically the logger for org.hibernate.SQL) to see exactly which SQL statements Hibernate executes. Then make decisions and take action based on facts, not assumptions.
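For example, with plain Hibernate bootstrapping you can turn on SQL output like this (a minimal sketch; hibernate.show_sql and hibernate.format_sql are standard Hibernate settings, shown here because they need no logging configuration, but setting the org.hibernate.SQL logger to DEBUG in your logging framework gives the same output):

```java
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class ShowSqlExample {
    public static void main(String[] args) {
        // Assumes a hibernate.cfg.xml with your connection settings and mappings.
        SessionFactory sessionFactory = new Configuration()
                .configure()
                .setProperty("hibernate.show_sql", "true")    // print every SQL statement
                .setProperty("hibernate.format_sql", "true")  // pretty-print for readability
                .buildSessionFactory();

        // ...run the cascade delete here and inspect the statements it produces...

        sessionFactory.close();
    }
}
```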
In a more general sense: are primary keys automatically indexed?
Yes.