
Partition a table that is part of merge replication and deploy the changes to production


I have been playing with merge replication of late, not on a sample project but for production changes. Let me first share our environment with you. We have a central server and a few clients. The server and clients share the same schema, and we use merge replication to replicate the changes between them.

There were some load issues, and thus a few changes had to be made. These are the changes I am going to talk about today:
  • Partition a table vertically.
  • Transfer data from the existing table into the new table.
  • Make sure that the network usage due to replication is as low as possible.
This is a common scenario for anyone working in a distributed environment. I am sharing this post for everyone who will work on such environments in the future.

Challenges
At first it all seems simple. We can run a data migration script that transfers data from table A to table B on both the client and the server, and we are done. The problem is that merge replication will then try to sync the migrated data, and the column we are partitioning out holds images. The resulting network contention would be so huge that business processing might start timing out. We also cannot afford a long downtime.

Solution
The solution is:

Create the new table on the publisher and propagate the change to the subscribers (for detailed information see this).
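As a rough illustration of what the vertically partitioned table could look like, here is a minimal sketch. The table and column names (dbo.Documents, dbo.DocumentImages, DocumentId, ImageData) are hypothetical; your schema will differ.

-- Hypothetical example: move the large image column out of dbo.Documents
-- into a new table dbo.DocumentImages (all names are illustrative only).
CREATE TABLE dbo.DocumentImages
(
    DocumentId INT NOT NULL
        CONSTRAINT PK_DocumentImages PRIMARY KEY,
    ImageData  VARBINARY(MAX) NULL,
    CONSTRAINT FK_DocumentImages_Documents
        FOREIGN KEY (DocumentId) REFERENCES dbo.Documents (DocumentId)
);
GO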

Disable the merge replication triggers on the new table on both the publisher and the subscriber.
USE [database_name];
GO
-- Disables every trigger on the table, including the merge replication triggers.
DISABLE TRIGGER ALL ON <table_name>;
GO

By disabling the triggers we make sure that merge replication will not try to sync any changes made to this table.
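To confirm the triggers really are off, you can check sys.triggers. This is a quick sketch using the same <table_name> placeholder as above.

-- List the triggers on the table and whether each one is disabled (is_disabled = 1).
SELECT name, is_disabled
FROM sys.triggers
WHERE parent_id = OBJECT_ID(N'<table_name>');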

Create a job that keeps transferring data from table A to the new partitioned table B in the live environment, running on a nightly basis. Because most of the data will already have been moved by the time of the deployment, this makes sure the downtime is not huge.
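A minimal sketch of what the nightly migration step could look like, copying rows in small batches so the job can run while the system is live. The table and column names (dbo.Documents, dbo.DocumentImages, DocumentId, ImageData) and the batch size are assumptions.

-- Copy rows that have not yet been moved, in batches, to keep each transaction short.
DECLARE @BatchSize INT = 5000;
DECLARE @Rows INT = 1;

WHILE @Rows > 0
BEGIN
    INSERT INTO dbo.DocumentImages (DocumentId, ImageData)
    SELECT TOP (@BatchSize) s.DocumentId, s.ImageData
    FROM dbo.Documents AS s
    WHERE NOT EXISTS (SELECT 1
                      FROM dbo.DocumentImages AS t
                      WHERE t.DocumentId = s.DocumentId);

    SET @Rows = @@ROWCOUNT;
END;
GO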

Stop all data processing on the publisher and subscriber.

Run the merge agents to bring the publisher and subscribers in sync.
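If the merge agents run as SQL Server Agent jobs, they can be kicked off on demand with sp_start_job; the job name below is purely hypothetical.

-- Start the merge agent job by name (the job name is illustrative only).
EXEC msdb.dbo.sp_start_job @job_name = N'ServerName-DatabaseName-Publication-Subscriber-0';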

Run the data migration job one last time to bring the newly added table fully up to date.
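One simple way to confirm the final run has caught up is to compare row counts between the old and the new table, assuming every source row gets a corresponding row in the new table (again using the hypothetical names from above).

-- Both counts should match once the migration has caught up.
SELECT
    (SELECT COUNT(*) FROM dbo.Documents)      AS SourceRows,
    (SELECT COUNT(*) FROM dbo.DocumentImages) AS MigratedRows;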

Now perform any other changes you need to support the partitioning, for example dropping the migrated column from the original table.
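For instance, if the image column has been moved out of the original table, this is the point where it can be dropped. A sketch with the hypothetical names from above; check for dependent indexes or constraints first.

-- Drop the large column from the original table now that the data lives in dbo.DocumentImages.
ALTER TABLE dbo.Documents DROP COLUMN ImageData;
GO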

Re-enable the triggers on both the publisher and the subscriber.
USE [database_name];
GO
-- Re-enables all triggers on the table, including the merge replication triggers.
ENABLE TRIGGER ALL ON <table_name>;
GO

You have now deployed the changes without any network contention: merge replication did not try to sync the data of the new table because its triggers were off, and all other tables were already in sync. Since the bulk of the data was migrated ahead of time by the nightly job, the downtime is as small as you need it to be.
