How do I speed up an SSIS Transfer Server Objects task that runs really slowly?
Within SSIS 2005, I used the Import/Export Wizard to create a package that drops/recreates some tables and replaces their data between my production server and my development machine. The control flow the wizard created was extremely complicated, so I built a new package using the "Transfer SQL Server Objects Task", which is really easy to configure and set up compared to the crazy thing the wizard produced. The problem is that the package I created takes over 3 minutes to run, while the wizard version takes about 20 seconds. They are basically doing the same thing, so why is there such a difference in execution time, and is there a setting I can change in the package using the Transfer Objects task to make it run quicker?
Here is the package that the wizard created. I have created similar packages before using the wizard and had no problem editing them, but I have never seen anything like this before. I cannot figure out where to modify the tables and schemas that I drop and create.
http://www.freeimagehosting.net/uploads/f7323b2ce3.png
Here are the properties of the Transfer task inside that For Loop container:
http://www.freeimagehosting.net/uploads/6f0dfc8269.png
What connection type are you using?
In my experience, when transferring between Oracle and SQL Server, the ADO.NET provider is miles slower than the Oracle OLE DB provider.
Why not use the wizard-generated package and figure out what it does? It is obviously doing things very efficiently.
It could be quite a number of things. Are you doing lookups? If so, use joins instead. You could also run a SQL Server Profiler trace to see what the crazy wizard package does as opposed to your custom package.
I don't use the wizard, but could it have created a stored procedure that actually does the work? That would explain why it runs faster, since a stored procedure can do all the work within the database.
I am curious what is inside TransferTask, as that seems to be where all the work is done.
You could look at exporting the data to a flat file, then using a Bulk Import to do this faster.
For some more thoughts about how fast things can go, look here; the most important part may be some of the comments, such as the one pointing out how he used Bulk Insert incorrectly.
http://weblogs.sqlteam.com/mladenp/articles/10631.aspx
UPDATE: You may also want to look at this: http://blogs.lessthandot.com/index.php/DataMgmt/DBAdmin/title-12 — toward the end he shows how long his tests took, but the first comment may be the most useful part for speeding up your import.
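As a rough sketch of the flat-file route suggested above: export with bcp, then load with BULK INSERT. The table, file path, and index names here are hypothetical, and the exact options depend on your data.

```sql
-- Hypothetical sketch: load a flat file previously exported with
--   bcp SourceDB.dbo.SourceTable out C:\exports\TargetTable.bcp -n -S SourceServer -T
-- TABLOCK plus a large batch size keeps logging overhead down.
BULK INSERT dbo.TargetTable
FROM 'C:\exports\TargetTable.bcp'
WITH (
    DATAFILETYPE = 'native',   -- matches the bcp -n native format export
    TABLOCK,                   -- bulk-update lock enables minimal logging
    BATCHSIZE = 100000         -- commit in chunks instead of per row
);
```

With TABLOCK and a simple or bulk-logged recovery model, this path can be minimally logged, which is usually where the big speedup over row-by-row inserts comes from.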
This class of performance problem usually stems from "commit" levels and logging.
The illustrated wizard-generated task does a "start transaction" before entering the loop and commits after all the data is transferred, which is the best thing to do if the table is not enormous.
Have you left "autocommit" on in your hand-coded version?
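To illustrate the difference, the wizard's behavior is roughly equivalent to wrapping the whole load in one explicit transaction rather than letting each statement autocommit. This is only a sketch with hypothetical table names, not the wizard's actual generated code:

```sql
-- Hypothetical sketch: one transaction (one log flush at commit) around
-- the whole reload, instead of an implicit commit after every statement.
BEGIN TRANSACTION;

DELETE FROM dbo.TargetTable;               -- or TRUNCATE TABLE outside the pattern

INSERT INTO dbo.TargetTable (Col1, Col2)
SELECT Col1, Col2
FROM SourceDB.dbo.SourceTable;

COMMIT TRANSACTION;                        -- single commit for all the work
```

Per-statement autocommits force a log flush for every statement, which is where this class of slowdown usually comes from.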
- Use the Fast Parse option on integer and date columns imports if not locale specific
- Use the SQL Server Native Client 10.x OLE DB provider for a high-performance connection, or consider the Attunity drivers for SQL Server <-> Oracle transfers
- Set the IsSorted property on the output of an upstream data flow component to True.
- Select the OLE DB Destination Data Access mode “Table or View – fast load”
- Run tasks in parallel; do not add unneeded precedence constraints
- Avoid using select * in data flow task
- Set the [RunInOptimizedMode] property to True. Optimized mode improves performance by removing unused columns, outputs, and components from the data flow
- Uncheck the Check Constraints box on the destination
- Set the network packet size to 32k instead of the default 4k
- Drop indexes on truncate-and-reload tables, and use TRUNCATE TABLE rather than DELETE when removing all rows
- If tables change slightly consider using Merge
- Consider using a dynamic index rebuild SP like the famous one listed here:
- Load test it using UAT with SQL Server Profiler set to filter on application "ssis-%"
- The default buffer size is 10 megabytes, with a maximum buffer size of 100 megabytes.
- Separate MDF/LDF files as well as TempDB, and defragment disks
- Find bottleneck in your Database by using DMVs
- Change from RAID 5 (or other) to RAID 10 or RAID 0
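A couple of the index- and truncate-related tips above can be sketched in T-SQL. The table and index names are hypothetical; the data flow itself would run between the two ALTER INDEX statements:

```sql
-- Hypothetical sketch: disable nonclustered indexes before a truncate-and-reload,
-- then rebuild them once, rather than maintaining them row by row during the load.
ALTER INDEX IX_TargetTable_Col1 ON dbo.TargetTable DISABLE;

TRUNCATE TABLE dbo.TargetTable;   -- minimally logged, unlike DELETE

-- ... run the SSIS data flow / fast load here ...

ALTER INDEX IX_TargetTable_Col1 ON dbo.TargetTable REBUILD;
```

Note this applies to nonclustered indexes; disabling the clustered index would make the table inaccessible until it is rebuilt.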