Is an SQL equality test fast for large numbers?
I run the very same query on two almost identical databases. The only difference is that the first database has ID entries from 1 to 9000 in two tables, while in the second database the IDs for the same two tables (and the same 9000 entries) are in the 458231044 and 103511044 ranges.
The query compares ID and UNIX time numerous times.
Running it on the first database takes barely noticeable time. On the second, it takes at least 30 seconds.
Is there a chance the problem is caused by comparing large numbers? If so, how do you fix it? Would comparing strings be faster?
I would need more details or maybe some sample schemas to test with, but it sounds more like a (lack of) indexing problem than a data type problem.
Check that you have the same indexes on both databases.
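One way to compare indexes is to ask each database for the index list on the tables involved. The question doesn't name the DBMS, so here is a minimal sketch using SQLite through Python's `sqlite3`; the table name `events`, the column names, and the index name are made up for the demo:

```python
import sqlite3

# Two in-memory databases standing in for the two real ones.
db1 = sqlite3.connect(":memory:")
db1.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, ts INTEGER)")
db1.execute("CREATE INDEX idx_events_ts ON events (ts)")

db2 = sqlite3.connect(":memory:")
db2.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, ts INTEGER)")
# Deliberately no index on ts here -- this is the difference to detect.

def indexes(conn, table):
    """Return the names of the indexes defined on `table`."""
    rows = conn.execute(f"PRAGMA index_list({table})").fetchall()
    return sorted(row[1] for row in rows)  # row[1] is the index name

print(indexes(db1, "events"))  # the ts index shows up here
print(indexes(db2, "events"))  # but not here
```

On MySQL the equivalent check would be `SHOW INDEX FROM events`, and on PostgreSQL querying `pg_indexes`; the point is simply to diff the output for the two databases.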
The easy way to test this would be to just try this:
SELECT (1 = 1);
SELECT (9000 = 9000);
SELECT (1234567890 = 1234567890);
If the last one is slower, it's a comparison issue.
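The same experiment can be scripted so the timings are repeatable. A sketch using SQLite via Python's `sqlite3` (any client against your actual DBMS would do); in practice all three operands fit in a 64-bit integer, so the comparison cost should not depend on their magnitude:

```python
import sqlite3
import timeit

conn = sqlite3.connect(":memory:")

def run(sql):
    """Execute a scalar SELECT and return its single value."""
    return conn.execute(sql).fetchone()[0]

# Time each of the three comparisons from the answer above.
for sql in ("SELECT 1 = 1",
            "SELECT 9000 = 9000",
            "SELECT 1234567890 = 1234567890"):
    t = timeit.timeit(lambda s=sql: run(s), number=10_000)
    print(f"{sql!r}: result={run(sql)}, {t:.3f}s for 10k runs")
```

If the three timings come out roughly equal, the slowdown is elsewhere (almost certainly the query plan), not in the equality test itself.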
I suspect it's not the numeric comparison but rather a side effect of a large primary key, or a vacuum-like operation that needs to be performed. Do the tables in the second database have ranges that high because they were populated and later had rows deleted, or did they actually start with PKs that large?
It may be the data type used for that particular index. It makes sense that comparisons on a larger data type may take longer.
Another possibility is that the second database isn't set up for indexing the same way the first database is.
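The missing-index hypothesis is easy to confirm with the query planner. A sketch using SQLite via Python's `sqlite3` (again, the actual DBMS isn't named in the question, and the `events` table and its columns are assumptions); it populates a table with IDs in the large range from the question and checks the plan before and after adding an index:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER, ts INTEGER)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    ((458231044 + i, 1_300_000_000 + i) for i in range(9000)),
)

def plan(sql):
    """Return the query planner's description of how `sql` would run."""
    rows = conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    return " ".join(row[-1] for row in rows)  # last column is the detail text

q = "SELECT * FROM events WHERE id = 458235000"
print(plan(q))  # expect a full-table SCAN here: every row is compared

conn.execute("CREATE INDEX idx_events_id ON events (id)")
print(plan(q))  # now expect a SEARCH using idx_events_id instead
```

On MySQL or PostgreSQL the equivalent is plain `EXPLAIN`. A full scan over 9000 rows is still cheap on its own; in a query that "compares ID and UNIX time numerous times" (e.g. in a join), an unindexed scan repeated per row is exactly the kind of thing that turns milliseconds into 30 seconds.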