I have been playing with merge replication of late, but this time for production changes rather than a sample project. Let me first share our environment with you. We have a central server and a small number of clients; the server and clients share the same schema, and we use merge replication to replicate changes between them.
There were some load issues, and a few changes had to be made as a result. These are the changes I am going to talk about today:
- Partition a table vertically.
- Transfer data from the existing table into the new table.
- Make sure that the network usage for replication stays as low as possible.
Challenges
At first it all seems simple enough: run a data migration script that transfers data from table A to table B on the client and the server, and we are done. The problem is that merge replication will then try to sync that data, and we are partitioning image columns, so the network contention would be heavy enough that business processing might start timing out. We also cannot afford a long downtime.
Solution
The solution is:
Create the new table on the publisher and propagate the change to the subscribers (for detailed information see this).
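If it helps to picture the change, here is a minimal sketch of what the new vertically partitioned table might look like. The table and column names (DocumentImage, DocumentId, ImageData) are purely illustrative, not our actual schema; the rowguid column is included because every article in a merge publication needs one.
-- Illustrative only: "table B" holding just the key and the heavy image column.
CREATE TABLE dbo.DocumentImage
(
    DocumentId  INT NOT NULL
        CONSTRAINT PK_DocumentImage PRIMARY KEY,
    ImageData   VARBINARY(MAX) NULL,
    rowguid     UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL
        CONSTRAINT DF_DocumentImage_rowguid DEFAULT NEWSEQUENTIALID()
);
GO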
Disable the merge replication triggers on the new table on both the publisher and the subscriber.
USE [database_name];
GO
DISABLE TRIGGER ALL ON <table_name>;
GO
By disabling the triggers we make sure that merge replication will not try to sync changes made to this table.
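Before moving any data it is worth a quick sanity check that the triggers really are off. A query along these lines (table name again hypothetical) lists the trigger state:
-- The MSmerge_ins/upd/del_* triggers should show is_disabled = 1.
SELECT name, is_disabled
FROM sys.triggers
WHERE parent_id = OBJECT_ID(N'dbo.DocumentImage');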
Create a job that keeps transferring data from table A to the new partitioned table B on the live environment, running on a nightly basis, as sketched below. This ensures the eventual downtime will not be long.
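A rough sketch of such a job step is shown here, assuming the hypothetical names dbo.Document for table A and dbo.DocumentImage for table B. It copies rows in small batches so it can run repeatedly against the live system without holding long locks or bloating the log:
-- Illustrative nightly transfer: copy only rows not yet present in table B.
DECLARE @BatchSize INT = 500;
WHILE 1 = 1
BEGIN
    INSERT INTO dbo.DocumentImage (DocumentId, ImageData)
    SELECT TOP (@BatchSize) d.DocumentId, d.ImageData
    FROM dbo.Document AS d
    WHERE NOT EXISTS (SELECT 1
                      FROM dbo.DocumentImage AS di
                      WHERE di.DocumentId = d.DocumentId);
    IF @@ROWCOUNT < @BatchSize
        BREAK;  -- less than a full batch left, done for tonight
END;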
Stop all data processing on the publisher and subscriber.
Run merge replication to bring the publisher and subscribers in sync.
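If you prefer to kick this off from T-SQL rather than from Replication Monitor, one illustrative way is to start the merge agent's SQL Server Agent job directly; the job name below is a placeholder for whatever your publication's merge agent job is actually called.
EXEC msdb.dbo.sp_start_job @job_name = N'<merge agent job name>';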
Run the data migration job one last time to sync the newly added table.
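Because the nightly job copies incrementally, this final run only has to pick up rows added since the previous night, so it should be quick. A simple check like the one below (same hypothetical names) confirms nothing was missed before you move on:
-- Should return 0 on both publisher and subscriber once migration is complete.
SELECT COUNT(*) AS MissingRows
FROM dbo.Document AS d
WHERE NOT EXISTS (SELECT 1
                  FROM dbo.DocumentImage AS di
                  WHERE di.DocumentId = d.DocumentId);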
Now perform any other changes you need to support the partitioning.
Enable the triggers again.
USE [database_name];
GO
ENABLE TRIGGER ALL ON <table_name>;
GO
You have now deployed the changes without any network contention, because merge replication did not try to sync the data in the new table (its triggers were off) and all other tables were already in sync. It also keeps the downtime as short as you need it to be.