Why is this query on a MEMORY table slower than its twin on InnoDB?
I have an InnoDB and a MEMORY version of a table. Both have the same indexes and the same 30,000 rows of data. There's a particular query that runs very slowly when executed against the MEMORY table.
Here it is against InnoDB:
SELECT emails.id
FROM emails
LEFT JOIN custom_data_person pd1 ON (pd1.person_id = emails.person_id)
WHERE pd1.field_id = 13
2928 rows in set (0.24 sec)
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: pd1
type: ref
possible_keys: person_id,field_id
key: field_id
key_len: 5
ref: const
rows: 20240
Extra: Using where; Using index
*************************** 2. row ***************************
id: 1
select_type: SIMPLE
table: emails
type: ref
possible_keys: person_id
key: person_id
key_len: 4
ref: test.pd1.person_id
rows: 1
Extra: Using index
And here it is against MEMORY:
SELECT emails.id
FROM emails_memory AS emails
LEFT JOIN custom_data_person pd1 ON (pd1.person_id = emails.person_id)
WHERE pd1.field_id = 13
2928 rows in set (1.40 sec)
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: emails
type: ALL
possible_keys: person_id
key: NULL
key_len: NULL
ref: NULL
rows: 30000
Extra:
*************************** 2. row ***************************
id: 1
select_type: SIMPLE
table: pd1
type: ref
possible_keys: person_id,field_id
key: person_id
key_len: 10
ref: test.emails.person_id,const
rows: 1
Extra: Using where; Using index
(Note that the person_id index is a BTREE index. I tried the same with the default HASH index and the results were the same.)
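For reference, the index type on a MEMORY table is declared in the DDL; here is a minimal sketch of what that looks like (the column types are assumptions, not the real schema):

CREATE TABLE emails_memory (
    id INT NOT NULL,
    person_id INT NOT NULL,
    INDEX person_id (person_id) USING BTREE  -- or USING HASH, the MEMORY default
) ENGINE=MEMORY;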
So it appears MySQL optimized the second query differently, and the plan it chose performs worse. Why is that? Can I "fix" it?
You don't really care. With a tiny table of 30,000 rows, anything is going to be very fast, even a full table scan.
However, it looks like the optimiser has chosen a different plan. In the InnoDB case it reads the custom_data_person table first, using a covering index, and then probes the emails table once for each row found in custom_data_person. That seems like the sane plan.
It's possibly a pessimisation in the optimiser for MEMORY tables.
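If you want to test that theory, STRAIGHT_JOIN forces MySQL to read the left table first, so you can impose the InnoDB-style join order on the MEMORY copy by hand. A sketch against the tables above (the WHERE clause on pd1 already makes the LEFT JOIN behave as an inner join, so reordering it is safe):

SELECT emails.id
FROM custom_data_person pd1
STRAIGHT_JOIN emails_memory AS emails
    ON (emails.person_id = pd1.person_id)
WHERE pd1.field_id = 13;

If that brings the runtime back in line with InnoDB, the difference really is just the chosen plan.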
I'd steer clear of MEMORY tables. If you want a table that behaves almost like a MEMORY table, use a MyISAM table and truncate it on server startup. MEMORY tables also waste space badly: they store VARCHARs padded to their maximum length, so they usually use a lot more memory than the same data would in another engine, whereas MyISAM uses storage very efficiently. Alternatively, use InnoDB tables for everything.
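One way to wire up the truncate-on-startup trick is MySQL's init_file option, which names a SQL file the server executes each time it starts. A sketch, using a hypothetical scratch table mydb.emails_cache:

# in my.cnf
[mysqld]
init_file = /etc/mysql/startup.sql

-- in /etc/mysql/startup.sql
TRUNCATE TABLE mydb.emails_cache;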
Unfortunately InnoDB doesn't provide a way to set durability on a per-table basis, so if the fsync on each transaction bothers you, you have to do bigger (and hence fewer) transactions.
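In practice that just means batching your writes so that many rows share one COMMIT, and hence one log flush. For example (column list illustrative):

START TRANSACTION;
INSERT INTO emails (person_id) VALUES (1);
INSERT INTO emails (person_id) VALUES (2);
-- ... thousands more rows ...
COMMIT;  -- one fsync for the whole batch

(There is also innodb_flush_log_at_trx_commit, which relaxes the per-commit flush, but it applies server-wide, not per table.)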
Using several engines is a compromise, because the server has no way to automatically divide its (finite) RAM between them. So you usually want to use just one engine, and that includes the MEMORY engine: configured generously, it will happily take heaps of memory away from InnoDB, which then gets slower because less of your database fits in its buffer pool.
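Concretely, that split is made by hand in my.cnf; every value below is illustrative, not a recommendation:

[mysqld]
innodb_buffer_pool_size = 1G    # InnoDB's cache for data and indexes
key_buffer_size         = 16M   # MyISAM index cache
max_heap_table_size     = 64M   # cap on the size of each MEMORY table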
Seriously though, you really, really, really don't care at 30k rows. 30k rows fit in the smallest amount of memory even if the rows are huge. Any engine is a memory engine when you only have 30k rows.