Transactional Replication For Write Heavy Medium Sized Database

We have a decent sized, write-heavy database that is about 426 GB (including indexes) and about 300 million rows. We collect location data from devices that report to our server every couple of minutes, and we serve about 10,000 devices, so there are many writes every second. The location table that stores the location of each device has about 223 million rows. The data is currently archived by year.

Problems occur when users run large reports against this database: the whole database grinds almost to a halt.

I understand I need a reporting database; my question is whether anyone has experience using SQL Server Transactional Replication on a database of equivalent size, and what that experience was like.

My rough plan is to point all the reports in our application at the reporting database, and use Transactional Replication to replicate the data from the master (OLTP) database to the slave (reporting) database.

Anyone have any thoughts on this strategy and the problems I may encounter?

Many thanks!


Transactional replication should work well in this scenario (the only effect the size of the database will have is the time taken to generate the initial snapshot). However, it may not solve your problem.

I think the issue you'll have with transactional replication is that the slave server will be under the same write load as the master as changes are applied, so it will still crawl when users run large reports (assuming it's of a similar spec).

Depending on how much the reporting data is allowed to lag the live data, this may or may not be acceptable to your users.

If some latency is acceptable you may get better performance from log shipping, since changes are applied in batches.
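Mechanically, log shipping is just scheduled transaction log backups restored on the reporting server. A minimal sketch (database name, file paths, and server layout are hypothetical), using WITH STANDBY so the reporting copy stays readable between restores:

```sql
-- On the primary (OLTP) server: back up the transaction log on a schedule.
BACKUP LOG LocationDB
    TO DISK = N'\\backupshare\logship\LocationDB_20110101_1200.trn';

-- On the reporting server: restore each log backup in sequence.
-- WITH STANDBY keeps the database readable between restores
-- (readers are disconnected while each restore runs).
RESTORE LOG LocationDB
    FROM DISK = N'\\backupshare\logship\LocationDB_20110101_1200.trn'
    WITH STANDBY = N'C:\logship\LocationDB_undo.tds';
```

The trade-off versus transactional replication is that readers are kicked off during each restore, so the restore frequency sets both your data latency and how often reports get interrupted.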

Before acquiring a reporting server, it's worth investigating the queries your users are running and modifying either their code or the indexing strategy to better match what they're trying to do.
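One way to find the offending report queries, assuming SQL Server 2005 or later, is the plan-cache DMVs. A rough sketch, ranking cached statements by total logical reads:

```sql
-- Top 10 cached statements by total logical reads (a rough proxy for I/O cost).
SELECT TOP (10)
    qs.total_logical_reads,
    qs.execution_count,
    qs.total_elapsed_time / 1000 AS total_elapsed_ms,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
            WHEN -1 THEN DATALENGTH(st.text)
            ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_logical_reads DESC;
```

Note this only covers what is still in the plan cache; for a fuller picture you'd capture a trace over a representative reporting window.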


Transactional Replication could work well for you. The things to consider:

  1. The tables in the target database should be treated as read-only; changes made at the Subscriber can break replication.
  2. The server containing the target database should be stout enough to handle the SELECT traffic from the reporting applications.
  3. Depending on the INSERT/UPDATE traffic, you may need to have a third server act as the Distribution server.
  4. You also have to consider the size of the Distribution database.
  5. Based on what I read here, I'd use a pull subscription from the Reporting server to offload traffic from the OLTP server.
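The pull-subscription setup in point 5 might look roughly like this (publication, database, and server names are hypothetical). With a pull subscription, the Distribution Agent runs on the reporting server rather than the Distributor, which is what offloads the work:

```sql
-- On the Publisher: register the Subscriber for a pull subscription.
EXEC sp_addsubscription
    @publication = N'LocationPub',
    @subscriber = N'REPORTSRV',
    @destination_db = N'LocationReporting',
    @subscription_type = N'pull';

-- On the Subscriber (in the subscription database): create the pull
-- subscription and its Distribution Agent job, which runs here and
-- pulls changes from the Distributor.
EXEC sp_addpullsubscription
    @publisher = N'OLTPSRV',
    @publisher_db = N'LocationDB',
    @publication = N'LocationPub';

EXEC sp_addpullsubscription_agent
    @publisher = N'OLTPSRV',
    @publisher_db = N'LocationDB',
    @publication = N'LocationPub',
    @distributor = N'DISTSRV';
```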

You can skip the torment of a snapshot by initializing the reporting database from a backup of the OLTP database. See https://msdn.microsoft.com/en-us/library/ms151705.aspx
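Initializing from a backup is an option on sp_addsubscription. A sketch under the assumption that the publication was created with @allow_initialize_from_backup = N'true' (names and paths are hypothetical):

```sql
-- First restore a recent full backup of the OLTP database on the
-- reporting server, then tell replication to start from that backup:
EXEC sp_addsubscription
    @publication = N'LocationPub',
    @subscriber = N'REPORTSRV',
    @destination_db = N'LocationReporting',
    @sync_type = N'initialize with backup',
    @backupdevicetype = N'disk',
    @backupdevicename = N'\\backupshare\LocationDB_full.bak';
```

Any transactions committed after the backup was taken are then delivered by the Distribution Agent, so take the backup as close to subscription creation as practical.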

There will be INSERT/UPDATE/DELETE traffic from the Replication into both the Distribution and the Subscriber databases. That requires consideration, but lock/block issues should be no worse (and probably better) than running those reports off of OLTP.

I am running multiple publications on a 2.6TB database with 2.5GB/day of growth, using both pure transactional to drive reports (to two reporting servers) and Peer-to-Peer Transactional to replicate data in a scale-out for a SaaS offering (to three more servers). Because of this, we have a separate distributor.

Hope this helps.

Thanks John.
