Slow performance when doing SQL join on lots of split results
I have about thirty strings that get passed to my stored procedure, and they act as the columns of a table I need to manipulate. Each string is delimited, and I use a split function to break each one apart into its appropriate rows, which usually come to twenty or fewer rows per string. The split function is quite fast and returns a table result with two columns: an ID, which is the primary key, and the split-out value. From examining the execution plan, the thirty splits only take up a fraction of my time. The part that seems to kill my performance is the actual join among those thirty split result sets.
DECLARE @WorkingTab TABLE ([ID] INT PRIMARY KEY, [Col1] VARCHAR(255), [Col2] VARCHAR(255), ...)

INSERT INTO @WorkingTab ([ID], [Col1], [Col2], ...)
SELECT
    splitStr1.ID,
    splitStr1.VALUE,
    splitStr2.VALUE,
    ...
FROM
    dbo.Split(@Str1, '~') splitStr1
    LEFT JOIN dbo.Split(@Str2, '~') splitStr2
        ON splitStr1.ID = splitStr2.ID
    LEFT JOIN dbo.Split(@Str3, '~') splitStr3
    ...
I've tried using inner joins instead of left joins, which decreased performance slightly. I've tried inserting just the first split result into an indexed temp table and then updating the remaining values into the appropriate columns of that temp table, which again resulted in a minor performance decrease. I've also tried inserting all of the results into a persistent table, which didn't improve performance at all. If anyone has any suggestions on other ways to approach this, or just general performance tips, I'm all ears. Thanks in advance.
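The indexed-temp-table variant I tried looked roughly like the following sketch (table and column names are illustrative, and only two of the thirty columns are shown):

-- Illustrative sketch of the insert-then-update approach described above.
CREATE TABLE #Working
(
    [ID]   INT PRIMARY KEY,   -- clustered index on ID
    [Col1] VARCHAR(255),
    [Col2] VARCHAR(255)
    -- ...Col3 through Col30...
);

-- Seed the rows from the first split...
INSERT INTO #Working ([ID], [Col1])
SELECT s.ID, s.VALUE
FROM dbo.Split(@Str1, '~') s;

-- ...then update one column per subsequent split.
UPDATE w
SET    w.[Col2] = s.VALUE
FROM   #Working w
JOIN   dbo.Split(@Str2, '~') s ON s.ID = w.ID;

-- Repeat the UPDATE for @Str3 ... @Str30.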
This is really not a good strategy, performance-wise. User-defined functions will never be high performers. Have you considered submitting your data as an XML document and parsing it in the stored procedure? I've done that before, and it's usually a decent performer.
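A hedged sketch of what that could look like, passing all the data as one XML parameter and shredding it row-wise into the @WorkingTab from the question (the XML shape and element names here are made up for illustration):

-- Sketch: pass the whole dataset as a single XML parameter instead of
-- thirty delimited strings, then shred it with nodes()/value().
DECLARE @Data XML = N'<rows>
    <row id="1"><c1>a</c1><c2>b</c2></row>
    <row id="2"><c1>c</c1><c2>d</c2></row>
</rows>';

INSERT INTO @WorkingTab ([ID], [Col1], [Col2])
SELECT
    r.value('@id', 'INT')                     AS [ID],
    r.value('(c1/text())[1]', 'VARCHAR(255)') AS [Col1],
    r.value('(c2/text())[1]', 'VARCHAR(255)') AS [Col2]
    -- ...and so on for the remaining columns...
FROM @Data.nodes('/rows/row') AS x(r);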
Regarding your actual question, you may be able to optimize your table-valued function so that it returns a table variable with a primary key defined, which could improve performance. But really, I recommend changing your upload strategy.
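For example, if the split function is a multi-statement TVF, the key can be declared directly on its return table. This is a generic sketch of such a function, not your actual code:

-- Generic sketch of a multi-statement split TVF whose return table declares
-- a primary key on ID, giving the optimizer a unique, ordered structure to join on.
CREATE FUNCTION dbo.Split (@Input VARCHAR(MAX), @Delim CHAR(1))
RETURNS @Result TABLE ([ID] INT PRIMARY KEY, [VALUE] VARCHAR(255))
AS
BEGIN
    DECLARE @Pos INT, @Next INT, @ID INT;
    SELECT @Pos = 1, @ID = 1;

    WHILE @Pos <= LEN(@Input) + 1
    BEGIN
        SET @Next = CHARINDEX(@Delim, @Input, @Pos);
        IF @Next = 0 SET @Next = LEN(@Input) + 1;

        INSERT INTO @Result ([ID], [VALUE])
        VALUES (@ID, SUBSTRING(@Input, @Pos, @Next - @Pos));

        SET @Pos = @Next + 1;
        SET @ID  = @ID + 1;
    END

    RETURN;
END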
The joins will be quite inefficient for thirty table variables, as it has to scan each one for every join. You need to get it to an operation that behaves more like O(N) or O(N log N); practically, I think this means getting all the outputs sorted and recognized as being sorted by the query optimiser. I can't think of a way to do this that is simple, clean and compact. However, one of the following approaches might work:
Load the output from each split function into a set of temporary tables and create a clustered index on ID for each table. This might get a reasonably efficient merge join plan and the clustered index builds will be O(N log N) with an effectively O(N) merge join operation.
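Roughly like this, shown for three of the thirty splits (temp table and index names are illustrative):

-- Materialize each split into a temp table and cluster it on ID so the
-- optimizer can use merge joins instead of repeatedly scanning table variables.
SELECT ID, VALUE INTO #s1 FROM dbo.Split(@Str1, '~');
SELECT ID, VALUE INTO #s2 FROM dbo.Split(@Str2, '~');
SELECT ID, VALUE INTO #s3 FROM dbo.Split(@Str3, '~');
-- ...#s4 through #s30...

CREATE CLUSTERED INDEX IX_s1 ON #s1 (ID);
CREATE CLUSTERED INDEX IX_s2 ON #s2 (ID);
CREATE CLUSTERED INDEX IX_s3 ON #s3 (ID);

INSERT INTO @WorkingTab ([ID], [Col1], [Col2], [Col3])
SELECT s1.ID, s1.VALUE, s2.VALUE, s3.VALUE
FROM #s1 s1
LEFT JOIN #s2 s2 ON s2.ID = s1.ID
LEFT JOIN #s3 s3 ON s3.ID = s1.ID;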
Open cursors on each table, sorted in order of ID, and loop through them all procedurally (i.e. get one row from each of the cursors on every loop iteration). If the row counts on the output of the functions are reasonably small, then this might not be too expensive. The cursor ops will be somewhat expensive, but the query is effectively O(N log N) and the looping operation is O(N).
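A lockstep-cursor sketch for just two of the splits, inserting into the @WorkingTab from the question; extending it to thirty means one cursor and one FETCH per split, and it assumes every split returns the same set of IDs:

-- Sketch: walk two split results in ID order with lockstep cursors.
DECLARE @ID1 INT, @Val1 VARCHAR(255), @ID2 INT, @Val2 VARCHAR(255);

DECLARE cur1 CURSOR FAST_FORWARD FOR
    SELECT ID, VALUE FROM dbo.Split(@Str1, '~') ORDER BY ID;
DECLARE cur2 CURSOR FAST_FORWARD FOR
    SELECT ID, VALUE FROM dbo.Split(@Str2, '~') ORDER BY ID;

OPEN cur1;
OPEN cur2;

FETCH NEXT FROM cur1 INTO @ID1, @Val1;
FETCH NEXT FROM cur2 INTO @ID2, @Val2;

WHILE @@FETCH_STATUS = 0
BEGIN
    INSERT INTO @WorkingTab ([ID], [Col1], [Col2])
    VALUES (@ID1, @Val1, @Val2);

    FETCH NEXT FROM cur1 INTO @ID1, @Val1;
    FETCH NEXT FROM cur2 INTO @ID2, @Val2;
END

CLOSE cur1;  DEALLOCATE cur1;
CLOSE cur2;  DEALLOCATE cur2;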
A better plan might be to do the whole thing procedurally (i.e. avoid the table-valued split function). If the strings can all be sorted by ID as you get them (or possibly as a preliminary step), you can do the whole thing with string operations: peel off the first record from each string and keep the tail of the string for the next iteration.
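A sketch of that fully procedural version for two of the strings, again inserting into the question's @WorkingTab; it assumes the strings are already aligned so the nth token of each string belongs to the same row, and the peeling pattern just repeats for the remaining strings:

-- Sketch: peel one '~'-delimited token off the front of each string per
-- iteration, entirely with string functions, and insert the row directly.
DECLARE @ID INT = 1, @Tok1 VARCHAR(255), @Tok2 VARCHAR(255), @P1 INT, @P2 INT;

WHILE LEN(@Str1) > 0
BEGIN
    SET @P1 = CHARINDEX('~', @Str1);
    SET @P2 = CHARINDEX('~', @Str2);

    -- The head of each string is this iteration's value...
    SET @Tok1 = CASE WHEN @P1 = 0 THEN @Str1 ELSE LEFT(@Str1, @P1 - 1) END;
    SET @Tok2 = CASE WHEN @P2 = 0 THEN @Str2 ELSE LEFT(@Str2, @P2 - 1) END;

    -- ...and the tail is kept for the next iteration.
    SET @Str1 = CASE WHEN @P1 = 0 THEN '' ELSE SUBSTRING(@Str1, @P1 + 1, LEN(@Str1)) END;
    SET @Str2 = CASE WHEN @P2 = 0 THEN '' ELSE SUBSTRING(@Str2, @P2 + 1, LEN(@Str2)) END;

    INSERT INTO @WorkingTab ([ID], [Col1], [Col2])
    VALUES (@ID, @Tok1, @Tok2);

    SET @ID = @ID + 1;
END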