
Access pass-through query for large SQL table

I have a large table in a MySQL database (6 million records). If I link the table, I can open it just fine -- it seems like Access requests the records as needed. However, if I use a pass-through query, it seems like Access requests the entire table before opening it. This is slow enough on my normal tables (200,000 records), but the big ones are impossible. I want to use a pass-through query so I can use SQL statements, but I need to make it faster. Is there a way to do this? Thanks!

EDIT: Here is the query; you can't get much simpler than this:

    SELECT * FROM Traffic12


Your query is asking for the ENTIRE table, so Access is doing exactly what you're telling it to do. The times to use a pass-through query are when you want the WHERE clause (the filtering) executed on the server, when you want the joins performed on the server, when you are taking advantage of server-side functionality (such as a UDF), or when you want to add "hinting" that the back-end server will understand.
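
As an illustration only (the ReportDate column and the date range are assumptions, not something from your schema), a pass-through whose filtering actually happens on the MySQL server might look like this:

    -- Pass-through SQL is sent verbatim to MySQL; only the matching rows come back over the wire.
    -- ReportDate is a hypothetical indexed column used here for illustration.
    SELECT *
    FROM Traffic12
    WHERE ReportDate >= '2012-01-01'
      AND ReportDate <  '2012-02-01';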

The apparent performance benefit you notice with the linked table comes from Access fetching only a certain number of rows at a time; when you use a pass-through query you bypass that paging optimization entirely.
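
If you genuinely need a pass-through against the whole table, one workaround (a sketch; the id column is an assumption about your primary key) is to let MySQL do the limiting rather than Access:

    -- MySQL limits the rows, so Access never pulls all 6 million records.
    -- id stands in for whatever primary key Traffic12 actually has.
    SELECT *
    FROM Traffic12
    ORDER BY id
    LIMIT 1000;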

When the filtering is done server-side, a pass-through query can be much more parsimonious with bandwidth than a non-pass-through query and can therefore yield large performance gains. Or sometimes the back-end is a humongous 4-CPU machine with immense amounts of RAM that can churn through large indexes in a flash. You have to assess the situation to see which approach is better, taking all factors into account.
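
For example (again a sketch, reusing the hypothetical ReportDate column), a server-side aggregation returns a handful of summary rows instead of millions of detail rows:

    -- The GROUP BY runs on the MySQL server; only the daily totals cross the network.
    SELECT ReportDate, COUNT(*) AS RowCount
    FROM Traffic12
    GROUP BY ReportDate;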
