
MySQL - Basic issue with a large table

In my database there are two large tables. The first one (A) has 1.7 million rows; the second one (B) has 2.1 million. Records in A and B are roughly the same size.

I can run any operation on A. It takes time, but it works. On B, I can't do anything: even a simple SELECT COUNT(*) hangs forever. The problem is that I don't see any error; it just hangs (SHOW PROCESSLIST shows the query stuck in the "updating" state forever).
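For reference, a minimal sketch of the kind of diagnostics that apply while the query hangs, assuming the table is literally named B (the name is taken from the question, not from any actual session):

    -- What is each connection doing, and in which state is the stuck query?
    SHOW FULL PROCESSLIST;

    -- InnoDB internals: lock waits, pending I/O, long-running transactions.
    SHOW ENGINE INNODB STATUS\G

    -- The query that hangs; on InnoDB this has to scan an index or the whole table.
    SELECT COUNT(*) FROM B;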


It seems weird to me that such a small delta (percentage-wise) between 1.7 and 2.1 million rows could make such a difference: from being able to do everything to not being able to run even the simplest operation.

Can there be some kind of hard limit at 2 million rows?

I am on Linux 2.6+, and I use InnoDB.

Thanks! Pierre


It appears to depend more on the amount of data in each row than on the total number of rows. If the rows contain little data, you can handle many more of them before running into trouble than you can with wider rows. Check this link for more info:

http://dev.mysql.com/doc/refman/5.0/en/innodb-restrictions.html
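If you want to check that assumption, here is a hedged sketch that compares the stored row sizes of the two tables via information_schema ('mydb', 'A' and 'B' are placeholders for your actual schema and table names; TABLE_ROWS is only an estimate for InnoDB):

    -- Compare average row length and on-disk size of the two tables.
    SELECT TABLE_NAME,
           TABLE_ROWS,
           AVG_ROW_LENGTH,
           ROUND(DATA_LENGTH  / 1024 / 1024) AS data_mb,
           ROUND(INDEX_LENGTH / 1024 / 1024) AS index_mb
    FROM information_schema.TABLES
    WHERE TABLE_SCHEMA = 'mydb'
      AND TABLE_NAME IN ('A', 'B');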


The row size (the number of bytes needed to store one row) might be much larger for the second table. COUNT(*) may require a full table scan, i.e. reading through the entire table on disk; larger rows mean more I/O and a longer run time.

The presence/absence of indexes will likely make a difference too.
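One way to see which path the count takes, plus a possible mitigation, sketched with an illustrative column name (created_at is not from the original post):

    -- Which index (if any) does the count use? InnoDB keeps no cached row count,
    -- so COUNT(*) walks the smallest available index or the whole clustered index.
    EXPLAIN SELECT COUNT(*) FROM B;

    -- A narrow secondary index gives the optimizer something much smaller to scan
    -- than the full clustered index (the column name is purely illustrative).
    ALTER TABLE B ADD INDEX idx_created_at (created_at);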


As I said in my initial post, the two tables were fairly similar, so row size should be fairly close in both. That's why I was a bit surprised, and why I started to think that maybe, somehow, a 2-million-row limit was set somewhere.

It turns out my table was corrupted. It is bizarre, since I was still able to access some records (using joins with other tables) and MySQL was not "complaining". I found out by running a CHECK TABLE: it did not return any error, but it crashed mysqld every time...
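For readers landing here with the same symptom, a hedged sketch of the check and the usual recovery path (the exact steps depend on how badly the table is damaged; the table name B is a placeholder):

    -- Verify table integrity; in this case it crashed mysqld instead of reporting an error.
    CHECK TABLE B;

    -- Common recovery path: rebuild the table in place. If that also crashes,
    -- dump whatever is still readable (mysqldump) and reload it into a fresh table,
    -- possibly after restarting mysqld with innodb_force_recovery set.
    ALTER TABLE B ENGINE=InnoDB;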

Anyway, thank you all for your help on this.

Pierre
