Can you scale SQL Server to handle 100's of Terabytes? [closed]
One of my colleagues told me the other day that SQL Server wasn't designed to handle terabytes of data. That may have been true for SQL Server 2000, or for any database 10 years ago, but I don't believe it's the case today. How have others approached situations where they need to store massive amounts of data (100+ terabytes)? Growing a single server is probably not an option, but I would think we could partition the data horizontally across many smaller servers and use views, etc. to present it all as a single queryable table; something like the sketch below is what I have in mind. How do concurrency and performance hold up in a model like this, where data is horizontally partitioned across servers?
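For illustration, here is a minimal sketch of the kind of distributed partitioned view I'm picturing. The server names (Server2, Server3), the Sales database, and the year-based split are all placeholders, not an existing setup:

```sql
-- On each server, the local table carries a CHECK constraint that
-- defines its horizontal slice, so the optimizer can prune the
-- remote servers that cannot contain the requested rows.
CREATE TABLE dbo.Orders_2019
(
    OrderID    BIGINT         NOT NULL,
    OrderYear  SMALLINT       NOT NULL
        CHECK (OrderYear = 2019),
    CustomerID INT            NOT NULL,
    Amount     DECIMAL(18, 2) NOT NULL,
    CONSTRAINT PK_Orders_2019 PRIMARY KEY (OrderYear, OrderID)
);
GO

-- The view unions the local slice with the slices held on the other
-- servers, reached through linked servers (four-part names).
CREATE VIEW dbo.Orders
AS
SELECT OrderID, OrderYear, CustomerID, Amount FROM dbo.Orders_2019
UNION ALL
SELECT OrderID, OrderYear, CustomerID, Amount FROM Server2.Sales.dbo.Orders_2020
UNION ALL
SELECT OrderID, OrderYear, CustomerID, Amount FROM Server3.Sales.dbo.Orders_2021;
GO

-- A query against the view reads like a single-table query; with the
-- CHECK constraints in place, only the server holding 2020 data is hit.
SELECT CustomerID, SUM(Amount) AS TotalAmount
FROM dbo.Orders
WHERE OrderYear = 2020
GROUP BY CustomerID;
```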
Any suggestions / comments are greatly appreciated.
Thanks,
S
Whether it's designed to handle that size is debatable. If you want the cold hard facts of what is possible, read on.
According to the specifications published by Microsoft, SQL Server 2008 R2 (both 32- and 64-bit) has a maximum database size of 524,272 terabytes. The same limit applies to SQL Server 2008; for SQL Server 2005 it's 524,258 terabytes.
See, they made an improvement: from 2005 to 2008 you get an extra 14 terabytes in your database :)