Can you scale SQL Server to handle 100's of Terabytes? [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance. Closed 11 years ago.

One of my colleagues told me the other day that SQL Server wasn't designed to handle terabytes of data. That may have been true for SQL 2000, or for any database 10 years ago, but I don't believe it's the case today. How have others approached situations where they need to store massive amounts of data (100+ terabytes)? Scaling up one single server is probably not an option, but I would think we could partition the data horizontally across many smaller servers and use views, etc., so that a single query call spans all the servers. Any idea how concurrency, etc., performs in a model like this, where data is horizontally partitioned across servers?
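To make the idea concrete, here is a minimal Python sketch of the scheme described above: rows are routed to one of several smaller "servers" by hashing the partition key, and a query is answered by fanning out to every shard and merging the results, which is roughly the role a distributed partitioned view plays. This is an illustration only, not SQL Server's actual mechanism, and all names here (`Shard`, `route`, `query_all`) are hypothetical.

```python
class Shard:
    """Stands in for one smaller server holding a horizontal slice of the data."""
    def __init__(self, name):
        self.name = name
        self.rows = {}  # key -> record

    def put(self, key, record):
        self.rows[key] = record

    def select(self, predicate):
        # Each shard scans only its own slice of the data.
        return [r for r in self.rows.values() if predicate(r)]


def route(key, shards):
    """Pick the owning shard by hashing the partition key."""
    return shards[hash(key) % len(shards)]


def query_all(shards, predicate):
    """Fan the query out to every shard and union the results --
    one logical 'view' over many physical servers."""
    results = []
    for shard in shards:
        results.extend(shard.select(predicate))
    return results


# Spread 1,000 customer rows across four hypothetical servers.
shards = [Shard(f"server{i}") for i in range(4)]
for customer_id in range(1000):
    record = {"id": customer_id, "region": "EU" if customer_id % 2 else "US"}
    route(customer_id, shards).put(customer_id, record)

# A single logical query that actually touches all four servers.
eu_customers = query_all(shards, lambda r: r["region"] == "EU")
```

In SQL Server terms, the `route` step corresponds to CHECK constraints on the member tables that define which rows each server owns, and `query_all` corresponds to a UNION ALL distributed partitioned view; concurrency then depends largely on whether a given query can be pruned to a few shards or must fan out to all of them.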

Any suggestions / comments are greatly appreciated.

Thanks,

S


Whether it's designed to handle that size is a matter of potential debate. If you want the cold hard facts of what is possible then read on.

According to the specifications published by Microsoft, SQL Server 2008 R2 (32- and 64-bit versions) has a maximum database size of 524,272 terabytes. The same limit applies to SQL Server 2008; for SQL Server 2005 it is 524,258 terabytes.

See, they made an improvement: from 2005 to 2008 you can have an extra 14 terabytes in your database :)
