Best Practice for loading non-existent data
I'm trying to build a table in MS SQL Server 2008 and load it with roughly 50,000 rows of data. Right now I'm doing something like:
Create Table MyCustomData
(
    ColumnKey Int Not Null,
    Column1 NVarChar(100) Null,
    Column2 NVarChar(100) Null,
    Primary Key Clustered
    (
        ColumnKey ASC
    )
    WITH (
        PAD_INDEX = OFF,
        STATISTICS_NORECOMPUTE = OFF,
        IGNORE_DUP_KEY = OFF,
        ALLOW_ROW_LOCKS = ON,
        ALLOW_PAGE_LOCKS = ON
    )
)
CREATE INDEX IDX_COLUMN1 ON MyCustomData([COLUMN1])
CREATE INDEX IDX_COLUMN2 ON MyCustomData([COLUMN2])
DECLARE @MyCount Int
SET @MyCount = 0
WHILE @MyCount < 50000
BEGIN
    INSERT INTO MyCustomData (ColumnKey, Column1, Column2)
    SELECT @MyCount + 1, 'Custom Data 1', 'Custom Data 2'

    SET @MyCount = @MyCount + 1
END
My problem is that this is crazy slow. I thought I could write a SELECT statement to generate my custom data and use that as the source for my INSERT INTO statement, i.e. something like:
INSERT INTO MyCustomData
(ColumnKey, Column1, Column2)
From (Select Top 50000 Row_Count(), 'Custom Data 1', 'Custom Data 2')
I know this doesn't work, but it's the only thing I can show that illustrates what I'm after. Any suggestions would be greatly appreciated.
Well, if I wanted 50,000 rows with constant data and a counter, I'd go with:
select row_number() over (order by t1.[number]), 'Custom data 1', 'Custom data 2'
from master..spt_values as t1
cross join (
    select [number]
    from master..spt_values
    where [type] = 'P' and [number] between 1 and 50
) as t2
where t1.[type] = 'P' and t1.[number] between 1 and 1000
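To actually load the table, wrap that in an INSERT ... SELECT. Since t1 contributes 1,000 rows and t2 contributes 50, the cross join yields exactly 50,000. A minimal sketch against the MyCustomData table from the question:

INSERT INTO MyCustomData (ColumnKey, Column1, Column2)
SELECT
    row_number() over (order by t1.[number]),  -- generates 1 through 50000 for the key
    'Custom Data 1',
    'Custom Data 2'
FROM master..spt_values AS t1
CROSS JOIN (
    SELECT [number]
    FROM master..spt_values
    WHERE [type] = 'P' AND [number] BETWEEN 1 AND 50
) AS t2
WHERE t1.[type] = 'P' AND t1.[number] BETWEEN 1 AND 1000

One set-based statement like this replaces 50,000 separate inserts through the WHILE loop, which is where the time was going.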
http://www.mysqlperformanceblog.com/2011/02/01/sample-datasets-for-benchmarking-and-testing/ has some sample datasets for benchmarking and testing.
Another option would be to create your data in a CSV file using a language of your choice and then import it into SQL Server, which would be much faster than your current method.
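For the import step, BULK INSERT is one option. A minimal sketch, assuming a comma-delimited file at a hypothetical path C:\data\MyCustomData.csv with one record per line:

BULK INSERT MyCustomData
FROM 'C:\data\MyCustomData.csv'  -- hypothetical path to the generated file
WITH (
    FIELDTERMINATOR = ',',       -- column delimiter in the file
    ROWTERMINATOR = '\n'         -- one record per line
)

bcp or SSIS would work for the same job as well.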