
Partition a table present in merge replication and deploy the changes to production


I have been working with merge replication of late, on real production changes rather than a sample project. Let me first share our environment with you. We have a central server and a few clients, all sharing the same schema. We use merge replication to replicate changes between them.

There were some load issues, so a few changes had to be made. Here are the changes I am going to talk about today:
  • Partition a table vertically.
  • Transfer data from the existing table into the new table.
  • Make sure that the network usage (replication) stays as low as possible.
This is a common scenario for those working in a distributed environment. I am sharing this post for anyone who will work on such an environment in the future.

Challenges
At first it all seems simple. We could run a data migration script that transfers data from table A to table B on both client and server, and we would be done. The problem is that merge replication will then try to sync the data, and the columns we are partitioning out contain images. The network contention would be so high that business processing may start timing out. We also cannot afford a long downtime.

Solution
The solution is:

Create the new table on the publisher and propagate the change to the subscribers (for detailed steps, see my earlier post on adding a new article at the publisher).
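As a rough sketch (table, column, and publication names below are placeholders for your own environment), creating the vertically partitioned table and adding it as a merge article might look like this; sp_addmergearticle will add a ROWGUIDCOL column automatically if the table does not have one:

```sql
-- New table B holds the image column split out of table A.
-- It shares table A's primary key so rows can be matched up.
CREATE TABLE dbo.DocumentImage
(
    DocumentId INT            NOT NULL PRIMARY KEY,
    ImageData  VARBINARY(MAX) NULL
);
GO

-- Add the new table as an article so the schema reaches the subscribers.
EXEC sp_addmergearticle
    @publication   = N'MyMergePublication',
    @article       = N'DocumentImage',
    @source_owner  = N'dbo',
    @source_object = N'DocumentImage';
GO
```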

Disable merge replication triggers on the new table on publisher and subscriber both.
USE [database_name];
GO
DISABLE TRIGGER ALL ON <table_name>;
GO

By disabling the triggers we ensure that merge replication will not try to sync changes made to this table.
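To confirm the triggers are actually off, you can query sys.triggers for the table (merge replication triggers are typically named with an MSmerge_ prefix; the table name below is a placeholder):

```sql
-- is_disabled = 1 means the trigger is off.
SELECT name, is_disabled
FROM sys.triggers
WHERE parent_id = OBJECT_ID(N'dbo.DocumentImage');
```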

Create a job that keeps transferring data from table A to the new partitioned table B in the live environment, on a nightly basis. Since most rows will already have been copied before the cut-over, this keeps the final downtime small.
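A minimal sketch of what the job's copy step could do, assuming the placeholder table names from above and an arbitrary batch size of 5000 (batching keeps transaction log growth and blocking under control):

```sql
-- Copy rows from table A (dbo.Document) to table B (dbo.DocumentImage)
-- in batches, skipping rows that were copied on a previous night.
DECLARE @rows INT = 1;

WHILE @rows > 0
BEGIN
    INSERT INTO dbo.DocumentImage (DocumentId, ImageData)
    SELECT TOP (5000) a.DocumentId, a.ImageData
    FROM dbo.Document AS a
    WHERE NOT EXISTS (SELECT 1
                      FROM dbo.DocumentImage AS b
                      WHERE b.DocumentId = a.DocumentId);

    SET @rows = @@ROWCOUNT;  -- 0 once everything has been copied
END;
```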

Stop all data processing on the publisher and subscriber.

Run merge to bring them in sync.
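The merge agents normally run as SQL Server Agent jobs, so one way to kick off a sync from T-SQL is sp_start_job (the job name below is only an illustration; check msdb.dbo.sysjobs for the actual name in your topology):

```sql
-- Start the merge agent job for this publication/subscription pair.
EXEC msdb.dbo.sp_start_job
    @job_name = N'PUBSRV-MyDb-MyMergePublication-SUBSRV-1';
```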

Run the data migration job once more to bring the newly added table fully up to date.

Now perform any other changes that you need to, for supporting partitioning.
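For example, assuming the image column was the one moved out, this is the point where you would drop it from the original table (table and column names are placeholders):

```sql
-- Remove the migrated column from table A now that table B owns the data.
ALTER TABLE dbo.Document DROP COLUMN ImageData;
GO
```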

Enable triggers.
USE [database_name];
GO
ENABLE TRIGGER ALL ON <table_name>;
GO

You have now deployed the changes without any network contention: merge did not sync the data of the new table because its triggers were off, and all other tables were already in sync. This approach also keeps the downtime as small as you need it to be.
