Synchronization Pattern

I have written many times about “decoupling” and “integration” between Business Central and other systems; now I want to share my approach to data synchronization.

Let’s take a third-party WMS (Warehouse Management System) as an example. These are the requirements:

  • Real-time communication with the automated warehouse
  • On-premises installation (one per warehouse)
  • Continuous modification/creation of inventory items
  • High volume of warehouse picks

We need to replicate these tables from BC to the WMS as fast as possible:

  • Item
  • Item Cross Reference
  • Warehouse Activity Header
  • Warehouse Activity Line

At the end of each pick, the WMS will call a web service with the activity result.

Tracking modifications

Each BC table has two fields that are interesting for synchronization:

  • $systemId
  • $systemModifiedAt

In AL these fields can be filtered via RecordRef.SystemModifiedAtNo() and RecordRef.SystemIdNo(), but unfortunately they are not useful for this approach.

$systemId is a unique identifier of the record (not a hash of the primary key): if you add a new record with “001” as primary key, then delete it and later create the same record again, two or more unique identifiers are generated, inflating the change log and breaking the match with previous entries.

$systemModifiedAt can be used to find the records modified after a certain date/time. But what about server clock adjustments or skew? And what about deletions?
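
For reference, this is what the naive $systemModifiedAt approach would look like (a minimal sketch; LastSyncDateTime is an assumed variable holding the last synchronization timestamp). It catches inserts and modifications, but it misses deletions entirely:

    local procedure FindModifiedRecords(TableNo: Integer; LastSyncDateTime: DateTime)
    var
        RecRef: RecordRef;
        FRef: FieldRef;
    begin
        RecRef.Open(TableNo);
        // filter on the hidden $systemModifiedAt field
        FRef := RecRef.Field(RecRef.SystemModifiedAtNo());
        FRef.SetFilter('>%1', LastSyncDateTime);
        if RecRef.FindSet() then
            repeat
            // export the record...
            until RecRef.Next() = 0;
    end;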

We need a “Synchronization Entry” table to track insertion, modification and deletion of records.

💡 To avoid locking, this table must have an AutoIncrement Integer (or BigInteger) primary key.

💡 To avoid locking, this table should never be read or modified during a transaction, only appended to.

💡 Because the table is append-only, you need a scheduled task to compress multiple entries for the same record (see the sketch after the table definition below).

This is an example of a working table (pay attention to the primary key fields: in this example we track only the first 4 primary key fields of the source tables).

table 80000 "TST Synchronization Entry"
{
    DataClassification = CustomerContent;
    Caption = 'Synchronization Entry';

    fields
    {
        field(1; "Entry No."; BigInteger)
        {
            DataClassification = CustomerContent;
            AutoIncrement = true;
            Caption = 'Entry No.';
        }
        field(2; "Table ID"; Integer)
        {
            DataClassification = CustomerContent;
            Caption = 'Table ID';
        }
        field(20; "Primary Key 1"; Code[50])
        {
            DataClassification = CustomerContent;
            Caption = 'Primary Key 1';
        }
        field(21; "Primary Key 2"; Code[50])
        {
            DataClassification = CustomerContent;
            Caption = 'Primary Key 2';
        }
        field(22; "Primary Key 3"; Code[50])
        {
            DataClassification = CustomerContent;
            Caption = 'Primary Key 3';
        }
        field(23; "Primary Key 4"; Code[50])
        {
            DataClassification = CustomerContent;
            Caption = 'Primary Key 4';
        }
    }

    keys
    {
        key(PK; "Entry No.")
        {
            Clustered = true;
        }
        key(I01; "Table ID", "Primary Key 1", "Primary Key 2", "Primary Key 3", "Primary Key 4") { }
    }
}
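
Here is a minimal sketch of the compression task mentioned above. My assumption: an entry is redundant when a newer entry exists for the same record, because upserts are idempotent and the newest entry per record always survives:

    procedure CompressEntries()
    var
        SyncEntry: Record "TST Synchronization Entry";
        NewerEntry: Record "TST Synchronization Entry";
        EntriesToDelete: List of [BigInteger];
        EntryNo: BigInteger;
    begin
        // for very large logs, process in batches from a job queue entry
        if SyncEntry.FindSet() then
            repeat
                // is there a newer entry for the same record?
                NewerEntry.SetCurrentKey("Table ID", "Primary Key 1", "Primary Key 2", "Primary Key 3", "Primary Key 4");
                NewerEntry.SetRange("Table ID", SyncEntry."Table ID");
                NewerEntry.SetRange("Primary Key 1", SyncEntry."Primary Key 1");
                NewerEntry.SetRange("Primary Key 2", SyncEntry."Primary Key 2");
                NewerEntry.SetRange("Primary Key 3", SyncEntry."Primary Key 3");
                NewerEntry.SetRange("Primary Key 4", SyncEntry."Primary Key 4");
                NewerEntry.SetFilter("Entry No.", '>%1', SyncEntry."Entry No.");
                if not NewerEntry.IsEmpty() then
                    EntriesToDelete.Add(SyncEntry."Entry No.");
            until SyncEntry.Next() = 0;

        // delete in a second pass, outside the iteration
        foreach EntryNo in EntriesToDelete do
            if SyncEntry.Get(EntryNo) then
                SyncEntry.Delete();
    end;

A lagging client is not affected by compression: the newest entry for each record is preserved, and that entry alone is enough to re-send the current state of the record.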

Now we need to subscribe to the legendary 😎 “Global Triggers” events, appending to the log on every change:

    local procedure TableLogIsNeeded(RecNo: Integer) : Boolean
    begin
        // your logic here
    end;

    [EventSubscriber(ObjectType::Codeunit, Codeunit::"Global Triggers", 'OnDatabaseInsert', '', false, false)]
    local procedure OnGlobTriggDatabaseInsert(RecRef: RecordRef)
    begin
        if TableLogIsNeeded(RecRef.Number) then
            LogRecord(RecRef);
    end;

    [EventSubscriber(ObjectType::Codeunit, Codeunit::"Global Triggers", 'OnDatabaseModify', '', false, false)]
    local procedure OnGlobTriggDatabaseModify(RecRef: RecordRef)
    begin
        if TableLogIsNeeded(RecRef.Number) then
            LogRecord(RecRef);
    end;

    [EventSubscriber(ObjectType::Codeunit, Codeunit::"Global Triggers", 'OnDatabaseDelete', '', false, false)]
    local procedure OnGlobTriggDatabaseDelete(RecRef: RecordRef)
    begin
        if TableLogIsNeeded(RecRef.Number) then
            LogRecord(RecRef);
    end;

    [EventSubscriber(ObjectType::Codeunit, Codeunit::"Global Triggers", 'OnDatabaseRename', '', false, false)]
    local procedure OnGlobTriggDatabaseRename(RecRef: RecordRef; xRecRef: RecordRef)
    begin
        if TableLogIsNeeded(RecRef.Number) then begin
            LogRecord(xRecRef);
            LogRecord(RecRef);
        end;
    end;
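
💡 These events fire only for the tables you enable through the GetDatabaseTableTriggerSetup event of the same codeunit, so a subscriber like this is also needed (a minimal sketch reusing TableLogIsNeeded):

    [EventSubscriber(ObjectType::Codeunit, Codeunit::"Global Triggers", 'GetDatabaseTableTriggerSetup', '', false, false)]
    local procedure OnGetDatabaseTableTriggerSetup(TableId: Integer; var OnDatabaseInsert: Boolean; var OnDatabaseModify: Boolean; var OnDatabaseDelete: Boolean; var OnDatabaseRename: Boolean)
    begin
        if TableLogIsNeeded(TableId) then begin
            OnDatabaseInsert := true;
            OnDatabaseModify := true;
            OnDatabaseDelete := true;
            OnDatabaseRename := true;
        end;
    end;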

The LogRecord procedure simply appends the record’s primary key to the “Synchronization Entry” table:

    local procedure LogRecord(RecRef: RecordRef)
    var
        SyncEntry: Record "TST Synchronization Entry";
        KRef: KeyRef;
    begin
        SyncEntry.Init();
        SyncEntry."Entry No." := 0; // 0 lets AutoIncrement assign the next number
        SyncEntry."Table ID" := RecRef.Number;

        // KeyIndex(1) is the primary key; format 9 is the culture-invariant XML format
        KRef := RecRef.KeyIndex(1);
        if KRef.FieldCount >= 1 then
            SyncEntry."Primary Key 1" := CopyStr(Format(KRef.FieldIndex(1).Value, 0, 9), 1, MaxStrLen(SyncEntry."Primary Key 1"));
        if KRef.FieldCount >= 2 then
            SyncEntry."Primary Key 2" := CopyStr(Format(KRef.FieldIndex(2).Value, 0, 9), 1, MaxStrLen(SyncEntry."Primary Key 2"));
        if KRef.FieldCount >= 3 then
            SyncEntry."Primary Key 3" := CopyStr(Format(KRef.FieldIndex(3).Value, 0, 9), 1, MaxStrLen(SyncEntry."Primary Key 3"));
        if KRef.FieldCount >= 4 then
            SyncEntry."Primary Key 4" := CopyStr(Format(KRef.FieldIndex(4).Value, 0, 9), 1, MaxStrLen(SyncEntry."Primary Key 4"));

        SyncEntry.Insert();
    end;

This is how the table looks after a while (illustrative values; table 27 is “Item”):
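
  Entry No.  Table ID  Primary Key 1
  3512601    27        0002772
  3512602    27        0002774
  3512603    27        0002774
  3512604    27        0002776

Note the two entries for item 0002774: a record modified twice produces two entries until the compression task runs.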

Retrieving data

On the client side, you only need to remember the last synchronized “Entry No.”. A proper web service will return all changes from that number onwards. If your synchronized archive gets corrupted, you only need to reset “Entry No.” to zero and start the synchronization again.

💡 Even if callers must authenticate to Business Central to use this web service, it’s better to add data security and filtering, avoiding the download of private or unwanted data.

    procedure DownloadTable(RecNo: Integer; StartingNo: BigInteger) Result: JsonObject
    var
        SyncEntry: Record "TST Synchronization Entry";
        RecRef: RecordRef;
        FRef: FieldRef;
        KRef: KeyRef;
        I: Integer;
        EndingNo: BigInteger;
        JObject: JsonObject;
        JLines: JsonArray;
    begin
        RecRef.Open(RecNo);
        KRef := RecRef.KeyIndex(1);

        // if there are no new entries, the client keeps its current position
        EndingNo := StartingNo;

        SyncEntry.Reset();
        SyncEntry.SetFilter("Entry No.", '>%1', StartingNo);
        SyncEntry.SetRange("Table ID", RecRef.Number);
        if SyncEntry.FindSet() then
            repeat
                if KRef.FieldCount >= 1 then
                    SetFilterFromSyncEntry(KRef, 1, SyncEntry."Primary Key 1");
                if KRef.FieldCount >= 2 then
                    SetFilterFromSyncEntry(KRef, 2, SyncEntry."Primary Key 2");
                if KRef.FieldCount >= 3 then
                    SetFilterFromSyncEntry(KRef, 3, SyncEntry."Primary Key 3");
                if KRef.FieldCount >= 4 then
                    SetFilterFromSyncEntry(KRef, 4, SyncEntry."Primary Key 4");

                if RecRef.FindFirst() then begin
                    // the record still exists: send all its current values
                    JObject := CopyRecordRefToJsonObject(RecRef);
                    JObject.Add('$action', 'upsert');
                end else begin
                    // the record was deleted: send only its primary key,
                    // reconstructed from the filters set above
                    Clear(JObject);
                    I := 1;
                    while (KRef.FieldCount >= I) and (I <= 4) do begin
                        FRef := KRef.FieldIndex(I);
                        FRef.Value := FRef.GetRangeMax();
                        JObject.Add(FRef.Name, FieldRefToJsonValue(FRef));
                        I += 1;
                    end;
                    JObject.Add('$action', 'delete');
                end;

                JLines.Add(JObject);
                EndingNo := SyncEntry."Entry No."; // last entry actually included in the batch
            until (SyncEntry.Next() = 0) or (JLines.Count >= 1000);

        Result.Add('lines', JLines);
        Result.Add('endingNo', EndingNo);
    end;

Note that each response is capped at 1000 lines: the client simply calls DownloadTable again, passing the returned endingNo as the new StartingNo, until no more lines come back.

The web service uses three helper procedures:

  • SetFilterFromSyncEntry to filter RecRef with the logged primary key
  • CopyRecordRefToJsonObject to create a JSON object with RecRef values
  • FieldRefToJsonValue to JSON encode a FieldRef value

Here is the skeleton of these procedures.

    local procedure SetFilterFromSyncEntry(var KRef: KeyRef; Index: Integer; KeyValue: Text)
    var
        FRef: FieldRef;
        IntVal: Integer;
        // ...
    begin
        FRef := KRef.FieldIndex(Index);

        case FRef.Type of
            FieldType::Code,
            FieldType::Text:
                FRef.SetRange(KeyValue);
            FieldType::Integer,
            FieldType::Option:
                begin
                    Evaluate(IntVal, KeyValue);
                    FRef.SetRange(IntVal);
                end;
            // ...
        end;
    end;

    procedure CopyRecordRefToJsonObject(var RecRef: RecordRef) Result: JsonObject
    var
        FldRef: FieldRef;
        I: Integer;
        JName: Text;
        JValue: JsonValue;
    begin
        for I := 1 to RecRef.FieldCount() do begin
            FldRef := RecRef.FieldIndex(I);
            JName := FldRef.Name();

            if FldRef.Class() = FieldClass::Normal then begin
                JValue := FieldRefToJsonValue(FldRef);
                if not JValue.IsNull() then
                    Result.Add(JName, JValue);
            end;
        end;
    end;

    procedure FieldRefToJsonValue(var FldRef: FieldRef) JValue: JsonValue
    var
        TmpInt: Integer;
        TmpBigInt: BigInteger;
        // ...
    begin
        case FldRef.Type() of
            FieldType::Integer,
            FieldType::Option:
                begin
                    TmpInt := FldRef.Value;
                    JValue.SetValue(TmpInt);
                end;
            FieldType::BigInteger:
                begin
                    TmpBigInt := FldRef.Value;
                    JValue.SetValue(TmpBigInt);
                end;
            // ...
        end;
    end;

This is a stripped-down example of a JSON result for the “Item” table:

{
  "endingNo": 3512637,
  "lines": [
    {
      "No.": "0002772",
      "Description": "VARIMED ATROMB MCOL DX 5 COR 1",
      "Base Unit of Measure": "PZ",
      "Blocked": false,
      "VAT Bus. Posting Gr. (Price)": "",
      "VAT Prod. Posting Group": "P22",
      "YNS AIC Code": "930509284",
      "$action": "upsert"
    },
    {
      "No.": "0002774",
      "Description": "CREON*100CPS 300MG",
      "Base Unit of Measure": "PZ",
      "Blocked": true,
      "VAT Bus. Posting Gr. (Price)": "",
      "VAT Prod. Posting Group": "P10",
      "YNS AIC Code": "029018013",
      "$action": "upsert"
    },
    {      
      "No.": "0002776",
      "$action": "delete"
    }
  ]
}

Applying changes

Once the JSON packet with the changed data has been received, the client:

  • if the action is “upsert”, inserts or modifies the local record with the server data
  • if the action is “delete”, deletes the local record with the same primary key
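
If your client happens to be another AL application, the apply step can be sketched like this (SetKeyFiltersFromJson and CopyJsonObjectToRecordRef are hypothetical helpers, the inverse of the ones shown above); the same two branches apply with any other technology:

    local procedure ApplyJsonLine(RecNo: Integer; JLine: JsonObject)
    var
        RecRef: RecordRef;
        JToken: JsonToken;
    begin
        RecRef.Open(RecNo);
        SetKeyFiltersFromJson(RecRef, JLine); // hypothetical: filters the PK fields with the JSON values

        if not JLine.Get('$action', JToken) then
            exit;

        case JToken.AsValue().AsText() of
            'upsert':
                if RecRef.FindFirst() then begin
                    CopyJsonObjectToRecordRef(JLine, RecRef); // hypothetical inverse of CopyRecordRefToJsonObject
                    RecRef.Modify();
                end else begin
                    RecRef.Init();
                    CopyJsonObjectToRecordRef(JLine, RecRef);
                    RecRef.Insert();
                end;
            'delete':
                RecRef.DeleteAll(); // the primary key filters identify at most one record
        end;
    end;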

With OData / API web services and the JSON format you can easily integrate any system, with both Cloud and On-Premises Business Central environments.

For example, in my middlewares I only have to declare which tables I want to synchronize (in this case Item, Item Cross Reference, Warehouse Activity Header and Warehouse Activity Line); the pattern above takes care of the rest.

Conclusion

There are many methods to synchronize BC data with other systems; this one is quite easy, affordable and very fast, with minimal overhead on the BC side.

Data synchronization is a key point of a seamless integration project, allowing BC to keep the master data and orchestrate peripheral processes.