
Handling a Huge Number of Records in One Table

I would like to ask a couple of questions about how to handle roughly 100 million records in a single table.

The table will handle INSERT, SELECT, and UPDATE operations.

I have received some advice to index the table and to archive it into a couple of tables.

Are there any other suggestions that can help tweak the SQL performance?

Case: SQL Server 2008. Most of the time the updates touch a decimal value and a tinyint status column.

The INSERTs will not use BULK INSERT, since I'm assuming there are a lot of users, let's say 10,000-500,000 per minute, performing INSERT and UPDATE statements against the table.


You should consider what kind of columns you have.

The more nvarchar/text/etc. columns you include in your indexes, the slower those indexes will be.

Also, what RDBMS are you going to use? You have different options depending on whether it's SQL Server, Oracle, or MySQL...

But the crucial thing is definitely to build the right indexes for the queries you will run...
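As a minimal sketch, a narrow covering index on the tinyint status column could serve status-driven reads without touching the base table (the table and column names here are hypothetical, just to illustrate the shape):

    -- Hypothetical names: a narrow index on the frequently-filtered
    -- status column, covering the decimal value so a matching SELECT
    -- never has to look up the base table.
    CREATE NONCLUSTERED INDEX IX_Orders_Status
        ON dbo.Orders (Status)
        INCLUDE (Amount);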

One other thing, you could use BULK INSERT on SQL Server to speed up the inserts.
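A minimal BULK INSERT sketch, assuming a comma-delimited file sitting on the server (the table name, file path, and layout are all assumptions):

    -- TABLOCK lets SQL Server take a bulk-update lock, which generally
    -- speeds up large loads.
    BULK INSERT dbo.Orders
    FROM 'C:\data\orders.csv'
    WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', TABLOCK);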

But ask away, I have dealt with databases being populated with 70 million rows per day ;)

EDIT ----- now that more information has come in

I'll try to take a slightly different approach to the case and compare it to data scraping.

There is no doubt that INSERTs are faster than UPDATEs, so you might want to make a table that acts as a "collect" table. What I mean is a table that only ever gets inserts: no updates, everything is handled with inserts.

Then you use a trigger/event/scheduled job to process what has come into that table and populate whatever you need into one or more other tables.

This way you can apply a little business logic during the "cleanup" (update) step, keep the work on the DB server, and avoid holding up a connection while these things are done.
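A sketch of what that drain step could look like as a stored procedure run from a SQL Server Agent job. Every object name here is hypothetical, and it assumes the staging table records which order each row belongs to (OrderId) and when it was staged (StagedAt):

    CREATE PROCEDURE dbo.DrainOrderStaging
    AS
    BEGIN
        SET NOCOUNT ON;
        BEGIN TRANSACTION;

        -- Keep only the newest staged row per order, then apply it
        -- to the main table.
        WITH latest AS (
            SELECT OrderId, Amount, Status,
                   ROW_NUMBER() OVER (PARTITION BY OrderId
                                      ORDER BY StagedAt DESC) AS rn
            FROM dbo.OrderStaging
        )
        UPDATE o
        SET    o.Amount = l.Amount,
               o.Status = l.Status
        FROM   dbo.Orders AS o
        JOIN   latest AS l ON l.OrderId = o.OrderId AND l.rn = 1;

        -- Clear the staged rows now that they have been applied.
        DELETE FROM dbo.OrderStaging;

        COMMIT TRANSACTION;
    END;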

This of course also depends on what the "final" data are to be used for...



Clearly SQL Server 2008 is capable of 100 million records, but there are a lot of details to look at that just do not come into play at 100 thousand. Pick a good primary key. Consider the fill factor. Add other indexes (they will slow down inserts but speed up selects). Think about concurrency (locking); if you can accept dirty reads, that will help performance. This question needs a lot more detail: you should post the table design and your SELECT, UPDATE, and INSERT T-SQL statements. I did not vote your question down, but if you don't provide more detail it will get voted down.
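On the dirty-reads point, a hedged example: a NOLOCK table hint (equivalent to the READ UNCOMMITTED isolation level) stops a reporting SELECT from taking shared locks, at the cost of possibly reading uncommitted data. The table and column names are hypothetical:

    -- No shared locks are taken, so this SELECT does not block the
    -- writers, but it may see uncommitted ("dirty") rows.
    SELECT OrderId, Amount, Status
    FROM dbo.Orders WITH (NOLOCK)
    WHERE Status = 1;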

For inserts, be aware that you can insert multiple rows at once, which is much faster than issuing multiple single-row INSERT statements if BULK INSERT is not an option:

    INSERT INTO Production.UnitMeasure
    VALUES (N'FT2', N'Square Feet', '20080923'),
           (N'Y',   N'Yards',       '20080923'),
           (N'Y3',  N'Cubic Yards', '20080923');
