MySQL large LIMIT offset
I have searched for a solution to this, but alas, nothing.
Currently my table is pushing 5,800,000 rows from 5,000 different sources. E.g.:
CREATE TABLE stores (
    store_id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(255)
);
CREATE TABLE articles (
    article_id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    article_name CHAR(80),
    article_price INT UNSIGNED,
    store_id INT UNSIGNED
);
OK, so I have made indexes like so:
CREATE INDEX article ON articles(store_id, article_name);
And I thought I had made it bulletproof. Guess not.
When I search large data sets, around 200,000 articles, I'm issuing:
SELECT article_name,
article_price
FROM articles
WHERE store_id = $id
ORDER BY article_name
LIMIT 100000,20;
And I'm getting very long response times. Help please? :S
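For reference, EXPLAIN makes the problem visible (a diagnostic sketch, not part of the original question): even when the (store_id, article_name) index is chosen, a LIMIT with a large offset still reads and discards the first 100,000 matching entries before returning 20 rows.

-- Check the query plan; the large offset is the cost, not the filtering.
EXPLAIN SELECT article_name, article_price
FROM articles
WHERE store_id = $id
ORDER BY article_name
LIMIT 100000, 20;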
Yes, well, I figured it out, as no one else would help me :D
The idea is as follows: when I request some page, I do the following:
CREATE TEMPORARY TABLE result (row_id INT UNSIGNED, article_id INT UNSIGNED);
SET @row := 0;
INSERT INTO result
SELECT @row := (@row + 1), article_id
FROM articles
ORDER BY $sort $order;
and then a simple SELECT/JOIN:
SELECT *
FROM result
LEFT JOIN articles ON result.article_id = articles.article_id
WHERE row_id >= $from
  AND row_id < $to;
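A related trick avoids the temporary table entirely: a deferred join (a sketch, not part of the original answer; it assumes the (store_id, article_name) index from the question). The subquery pages through the narrow index and returns only 20 ids; the outer query then fetches the full rows for just those ids:

-- The subquery scans the index only; full rows are read for at most 20 ids.
SELECT a.article_name, a.article_price
FROM articles a
JOIN (
    SELECT article_id
    FROM articles
    WHERE store_id = $id
    ORDER BY article_name
    LIMIT 100000, 20
) AS page ON page.article_id = a.article_id
ORDER BY a.article_name;

On InnoDB the secondary index already stores the primary key, so the subquery never has to touch the table data at all.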
Cheers...
Have you tried two separate indices on store_id and article_name? This may allow for more efficient filtering and ordering by MySQL.

You could also consider moving this query into a stored procedure so that MySQL has the ability to cache the execution / sub-result-sets.

If none of that works, consider the number of insert/update/delete operations occurring on that table. If they are high, you could consider replication to a slave DB where all read operations occur.

If none of that works, you may want to upgrade your hardware, as MySQL should be able to handle tables of this size quite easily.
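For concreteness, the two separate indices suggested above might look like this (the index names are illustrative):

CREATE INDEX idx_articles_store ON articles (store_id);
CREATE INDEX idx_articles_name ON articles (article_name);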