Indexing a column used to ORDER BY with a constraint in PostgreSQL

I've got a modest table of about 10k rows that is often sorted by a column called 'name'. So, I added an index on this column. Now selects on it are fast:

EXPLAIN ANALYZE SELECT * FROM crm_venue ORDER BY name ASC LIMIT 10;
  ...query plan...
 Limit  (cost=0.00..1.22 rows=10 width=154) (actual time=0.029..0.065 rows=10 loops=1)
   ->  Index Scan using crm_venue_name on crm_venue  (cost=0.00..1317.73 rows=10768 width=154) (actual time=0.026..0.050 rows=10 loops=1)
 Total runtime: 0.130 ms

If I increase the LIMIT to 60 (which is roughly what I use in the application) the total runtime doesn't increase much further.

Since I'm using a "logical delete pattern" on this table, I only consider entries where delete_date IS NULL. So this is a common select I make:

SELECT * FROM crm_venue WHERE delete_date IS NULL ORDER BY name ASC LIMIT 10;

To make that query snappy as well, I put a partial index on the name column with a matching predicate:

CREATE INDEX name_delete_date_null ON crm_venue (name) WHERE delete_date IS NULL;

Now it's fast to do the ordering with the logical delete constraint:

EXPLAIN ANALYZE SELECT * FROM crm_venue WHERE delete_date IS NULL ORDER BY name ASC LIMIT 10;
 Limit  (cost=0.00..84.93 rows=10 width=154) (actual time=0.020..0.039 rows=10 loops=1)
   ->  Index Scan using name_delete_date_null on crm_venue  (cost=0.00..458.62 rows=54 width=154) (actual time=0.018..0.033 rows=10 loops=1)
 Total runtime: 0.076 ms

Awesome! But this is where I get myself into trouble. The application rarely calls for just the first 10 rows. So, let's select some more rows:

EXPLAIN ANALYZE SELECT * FROM crm_venue WHERE delete_date IS NULL ORDER BY name ASC LIMIT 20;

 Limit  (cost=135.81..135.86 rows=20 width=154) (actual time=18.171..18.189 rows=20 loops=1)
   ->  Sort  (cost=135.81..135.94 rows=54 width=154) (actual time=18.168..18.173 rows=20 loops=1)
     Sort Key: name
     Sort Method:  top-N heapsort  Memory: 21kB
     ->  Bitmap Heap Scan on crm_venue  (cost=4.67..134.37 rows=54 width=154) (actual time=2.355..8.126 rows=10768 loops=1)
           Recheck Cond: (delete_date IS NULL)
           ->  Bitmap Index Scan on crm_venue_delete_date_null_idx  (cost=0.00..4.66 rows=54 width=0) (actual time=2.270..2.270 rows=10768 loops=1)
                 Index Cond: (delete_date IS NULL)
 Total runtime: 18.278 ms

As you can see, it goes from about 0.1 ms to over 18 ms!

Clearly there is a point at which the ordering can no longer be satisfied by the index and a separate sort is needed. I noticed that as I increase the LIMIT from 20 to higher values, it always takes around 20-25 ms.

Am I doing it wrong, or is this a limitation of PostgreSQL? What is the best way to set up indexes for this type of query?


My guess would be that, since the index is logically a set of pointers to rows on a set of data pages, if the executor fetches a page that is known to contain ONLY non-deleted records, it doesn't have to recheck that page after fetching it in order to filter out deleted records.

Therefore, it may be that when you do LIMIT 10 and order by name, the first 10 rows that come back from the index are all on a data page (or pages) comprised only of non-deleted records. Since it knows those pages are homogeneous, it doesn't have to recheck them once it has fetched them from disk. Once you increase to LIMIT 20, at least one of the first 20 is on a mixed page that also contains deleted records. That then forces the executor to recheck each record, since it can't fetch data pages in less than one-page increments from either the disk or the cache.

As an experiment, you could create an index on (delete_date, name) and issue CLUSTER crm_venue USING your_new_index (or CLUSTER your_new_index ON crm_venue on older PostgreSQL versions). This should rebuild the table in the sort order of delete_date, then name. Just to be super-sure, you should then issue a REINDEX TABLE crm_venue. Now try the query again. Since all the rows with a NULL delete_date will be clustered together on disk, it may work faster with the larger LIMIT values.
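A minimal sketch of that experiment, assuming a hypothetical index name:

CREATE INDEX crm_venue_delete_date_name_idx ON crm_venue (delete_date, name);
CLUSTER crm_venue USING crm_venue_delete_date_name_idx;  -- pre-8.3 syntax: CLUSTER crm_venue_delete_date_name_idx ON crm_venue
REINDEX TABLE crm_venue;
ANALYZE crm_venue;  -- refresh planner statistics after the physical reorder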

Of course, this is all just off-the-cuff theory, so YMMV...


As you increase the number of rows, the index cardinality changes. I'm not sure, but it could be that because the query uses a greater number of rows from the table, it needs to read enough table blocks that those, plus the index blocks, make the index no longer worth using. This may be a miscalculation by the planner. Also, name (the indexed field) is not the field limiting the scope of the index, which may be wreaking havoc with the planner's math.
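One quick way to probe whether the planner is miscalculating is to discourage the bitmap scan for the current session and compare the resulting plan (a sketch using standard session-level planner settings):

SET enable_bitmapscan = off;  -- make the planner avoid the bitmap heap scan in this session
EXPLAIN ANALYZE SELECT * FROM crm_venue WHERE delete_date IS NULL ORDER BY name ASC LIMIT 20;
RESET enable_bitmapscan;      -- restore the default behaviour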

Things to try: Increase the percentage of the table considered when building your statistics; your data may be skewed in such a way that the statistics are not picking up a truly representative sample.
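A hedged sketch of raising the per-column statistics target (the value 1000 is just an illustration):

ALTER TABLE crm_venue ALTER COLUMN name SET STATISTICS 1000;
ALTER TABLE crm_venue ALTER COLUMN delete_date SET STATISTICS 1000;
ANALYZE crm_venue;  -- rebuild statistics using the larger sample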

Index all rows, not just the NULL rows, and see which is better. You could even try indexing WHERE delete_date IS NOT NULL.
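For instance (a sketch; the index names are illustrative and you would normally keep only one of these):

CREATE INDEX crm_venue_name_idx ON crm_venue (name);                                        -- index every row
CREATE INDEX crm_venue_name_deleted_idx ON crm_venue (name) WHERE delete_date IS NOT NULL;  -- or only the deleted rows
-- compare the plans with EXPLAIN ANALYZE and drop whichever index is not used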

Cluster based on an index on that field to reduce the data blocks required and turn it into a range scan.

NULLs and indexes do not always play nicely together. Try another way:

alter table crm_venue add column delete_flag char(1);
update crm_venue set delete_flag = 'Y' where delete_date is not null;
update crm_venue set delete_flag = 'N' where delete_date is null;
create index deleted_venue on crm_venue (delete_flag) where delete_flag = 'N';
SELECT * FROM crm_venue WHERE delete_flag = 'N' ORDER BY name ASC LIMIT 20;