SQL Server: How can a table scan be so expensive on a tiny table?
I'm looking at an execution plan from a troublesome query.
I can see that 45% of the plan is taken up doing a table scan on a table with seven (7) rows of data.
I am about to create a clustered index covering the columns in my query on a table with seven rows, and it feels... wrong. How can this part of my query take up so much of the plan when the table is so tiny?
I was reading up here, and I feel it might just be because of non-contiguous data; there are no indexes at all on the table in question. Overall, though, our database is large-ish (7 GB) and busy.
I'd love to know what others think - thanks!
EDIT:
The query is run very frequently and was involved in a deadlock (it was chosen as the victim). Right now it takes between 300 ms and 500 ms to run, but it takes longer when the database is busier.
The query:
select l.team1Score, l.team2Score, ls.team1ExternalID, ls.team2ExternalID, et.eventCategoryID, e.eventID, ls.statusCode
from livescoretracking l (nolock)
inner join liveScores ls (nolock) on l.liveScoreID = ls.liveScoreID
inner join db1.dbo.events e on e.gameid = ls.gameid
inner join db1.dbo.eventtype et (nolock) on e.eventTypeID = et.eventTypeID
inner join eventCategoryPayTypeMappings ecb (nolock) on (et.eventCategoryID = ecb.eventCategoryID and e.payTypeID = ecb.payTypeID and ecb.mainEvent = 1)
where ls.gameID = 286711
order by l.dateinserted
The problem table is the eventCategoryPayTypeMappings table - thanks!
A percentage cost is meaningless without knowing the total cost in real terms. For example, if the query takes 1 ms to execute, a 45% cost for the table scan is 0.45 of a millisecond, which is not worth trying to optimise; if the query takes 10 seconds to execute, then the 45% cost is significant and worth optimising. You can get real numbers with the session statistics options sketched below.
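A minimal way to measure the real cost in SSMS, using standard SQL Server session options (nothing here is specific to your schema):

set statistics io on;   -- per-table logical/physical reads
set statistics time on; -- CPU and elapsed time

-- run the problem query here, then read the Messages tab

set statistics io off;
set statistics time off;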
A table scan on a seven row table is not expensive. Barring query hints, the query engine will use a table scan on such a small table no matter what indexes exist. Can you show us more about the query in question and the problem with the execution plan?
If there are no indexes on the table, the query engine will always have to do a table scan. There's no other way it can process the data.
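A quick way to confirm what the engine has to work with (assuming the table lives in the dbo schema):

-- a single row with index_id 0 / type_desc 'HEAP' means no indexes at all
select i.index_id, i.name, i.type_desc
from sys.indexes i
where i.object_id = object_id('dbo.eventCategoryPayTypeMappings');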
Many RDBMS platforms will do a table scan on a table that small even if there are indexes. (I'm not sure about SQL Server specifically.)
I would be more concerned about the actual numbers in the query plan.
Deadlocks are usually more indicative of a resource access ordering issue than a problem with query design in particular. I would look at the other participant(s) in the deadlock and take a look at what objects each transaction had locked that were required by the other(s). If you can reorder to ensure consistent access order you may be able to avoid contention issues entirely.
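A minimal illustration of consistent access ordering, with hypothetical tables tA and tB standing in for whatever objects your deadlock graph actually shows:

declare @id int = 42;
begin tran;
    -- every transaction that touches both tables updates tA first, then tB,
    -- so no two transactions can hold each other's locks in a cycle
    update dbo.tA set val = val + 1 where id = @id;
    update dbo.tB set val = val + 1 where id = @id;
commit;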
It really depends how long the query takes from start to finish. 45% doesn't mean it's taking a long time if the query only takes, say, 10 ms. All it really says is that most of the time is spent doing the table scan, which is understandable.
Having an index may help when the table grows, and it is probably not a bad idea unless you know this table is not going to grow. However, you will find that adding an index to a table with 7 records makes little to no difference to performance; if you do add one, see the sketch below.
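If you do decide to add one, a nonclustered index on the join predicate columns from your posted query would be the natural shape (the index name here is made up):

create nonclustered index IX_ecptm_category_payType
on dbo.eventCategoryPayTypeMappings (eventCategoryID, payTypeID, mainEvent);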
A table scan on a small table is not a bad thing - if the whole table fits into the cache in a single read, the optimizer will calculate that a table scan costs less than navigating an index chain.
I would only recommend a clustered index if you want to help ensure that the contents will 'tend' to be sorted that way (though you will still need an explicit ORDER BY to guarantee it).
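For completeness, a sketch of what that would look like here (index name hypothetical; note a table can only have one clustered index):

create clustered index CIX_eventCategoryPayTypeMappings
on dbo.eventCategoryPayTypeMappings (eventCategoryID, payTypeID);
-- rows will tend to be stored in this key order, but only an explicit
-- order by in the query guarantees the order of the results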