FairCom Replication Administrator's Guide


FairCom Data Replication

User Guide





Deploying, managing, and monitoring FairCom Replication for clustered scalability and high-availability solutions


© Copyright 2022, FairCom Corporation. All rights reserved. For full information, see the FairCom Copyright Notice.

FairCom Replication Manager makes it easy to create database clusters that meet your exact needs for scalability and availability. It lets you combine replication and failover techniques to achieve the level of availability and scalability you need.

Each FairCom Server has data replication and failover built into it. From a central location, Replication Manager allows you to configure replication and failover across FairCom Servers.

Audience for this Document

This document explains how to use the Replication Manager to control your replication environment. It is intended for the following audiences:

  • Administrators who will use Replication Manager to simplify the management of their replication.
  • Developers who will use Replication Manager as part of a solution to replicate to other c-tree systems or to the cloud.



In This Chapter

High Availability and Scalability Within and Between Data Centers

High Availability

File/Table Requirements

Replication Network Recommendations


High Availability and Scalability Within and Between Data Centers

Data replication is built into the FairCom Server. Parallel processing threads replicate data at high speed while ensuring transaction consistency. Data can be asynchronously or synchronously replicated to one or more nodes. Asynchronous replication is ideal for replicating data across data centers or for high-speed, eventually consistent use cases. Synchronous replication is ideal for implementing clustered nodes for high availability solutions.

Replication ensures all changes to a table/file are replicated, including the creation, alteration, and deletion of tables and files. The data replicated from a table/file can be filtered so that only a subset of the table's data is replicated.

A browser-based graphical user interface, called Replication Manager, runs in a central location. It makes it easy to configure, manage, and monitor data replication across hundreds of servers and thousands of tables and files. FairCom Replication can also be automated through a JSON/HTTP web service API and a C/C++ API.
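As a minimal sketch of automating replication through a JSON/HTTP web service, the following builds a request body. The action name, field names, and server addresses are hypothetical placeholders for illustration, not FairCom's actual web service schema:

```python
import json

def build_replication_request(source, target, tables):
    """Build a JSON body for a hypothetical 'create replication plan' call.

    All keys and values below are illustrative placeholders, not the
    actual FairCom web service API.
    """
    return {
        "api": "replication",       # hypothetical service name
        "action": "createPlan",     # hypothetical action name
        "params": {
            "source": source,       # hypothetical source server address
            "target": target,       # hypothetical target server address
            "tables": tables,       # tables/files to replicate
        },
    }

body = build_replication_request(
    "source_server@10.0.0.1", "target_server@10.0.0.2", ["orders.dat"]
)
print(json.dumps(body, indent=2))
```

In practice, a tool would POST such a body to the Replication Manager's web service endpoint; consult the product's API reference for the real schema.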

Replicated tables/files can be read-only or read-write. Replication can also be bidirectional, and server nodes can be configured to detect failures and fail over automatically. This flexibility makes it possible to implement nearly any scenario, including ACID-consistent high availability, eventually consistent high availability, disaster recovery, massive horizontal scale (read-only or read-write), globally distributed data (read-only and read-write), global regulatory compliance, sharing data across microservices, and more.

Review the prerequisite document Clustering for Scalability and Availability for a comprehensive survey of major use cases and corresponding solutions.


High Availability

High Availability (HA) is the ability to remain available with no interruption to service during a system outage. Its purpose is to provide uninterrupted access to an application over a long time. It requires at least two servers running on different hardware. It optionally uses redundant hardware on each server and may run servers inside of virtual machines that automatically and transparently move off failed hardware servers to working hardware. It also requires mechanisms to detect software or hardware failure as well as mechanisms to ensure that a failed server remains down so that it does not process data unexpectedly and create data inconsistencies. Lastly, it provides mechanisms to automatically fail over database software and notify client applications to reconnect to the proper running database server.
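The last mechanism described above, having client applications reconnect to the proper running server after a failover, can be sketched as a retry loop over an ordered host list. This is a generic illustration, not FairCom client-library code; the host names and the `try_connect` callback are assumptions:

```python
import time

def connect_with_failover(hosts, try_connect, retries=3, delay=0.0):
    """Try each host in order until one accepts a connection.

    `try_connect` is a caller-supplied function that returns a connection
    on success or raises ConnectionError on failure. The retry policy is
    illustrative, not FairCom-specific.
    """
    last_error = None
    for _ in range(retries):
        for host in hosts:
            try:
                return try_connect(host)   # first healthy server wins
            except ConnectionError as exc:
                last_error = exc           # remember why this host failed
        time.sleep(delay)                  # back off before the next sweep
    raise ConnectionError(f"all hosts failed: {last_error}")

# Simulated servers: the primary is down, the secondary answers.
def fake_connect(host):
    if host == "primary:5597":
        raise ConnectionError("primary unreachable")
    return f"connected:{host}"

conn = connect_with_failover(["primary:5597", "secondary:5597"], fake_connect)
print(conn)  # connected:secondary:5597
```

A real client would also re-authenticate and re-open cursors after reconnecting, details that depend on the application.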

FairCom’s fall 2020 release provides integration with Linux and Windows OS failover solutions as well as its own database failover solution.

Disaster Recovery (DR) is the ability to recover from a catastrophic regional failure in a few hours or a few days. Its purpose is to recover from a disaster by running an application in a different geographical region. This requires databases to be running in multiple data centers in different regions. The databases replicate data continuously from one region to another.

FairCom has provided asynchronous replication for many years, which is the ideal solution for this use case.

FairCom now allows an application to combine DR with HA, which gives applications maximum resiliency.

Learn more...

For more information about the use cases for FairCom replication, see Use Cases for FairCom Replication in Clustering for Scalability and Availability.


File/Table Requirements

Note: The FairCom Replication version must match the version of FairCom Server running on the source and the target. Install exactly the same build in all of these places.

A new Replication Agent is incompatible with log entries from an old server, and an old Replication Agent is incompatible with log entries from a new server. This incompatibility appears as replicated file creation failing with error 43 (FVER_ERR). When you update a source server, update the Replication Agent as well.

In V12 and later, replication supports superfiles, Hot Alter Table, and partitioned files.

Transaction Processing

FairCom Replication relies on a transaction-log-based facility, so your files must be under transaction control to take advantage of it. See the c-tree Programmer's Reference Guide for complete information on data integrity and on enabling transaction processing at the file level.

Indexes and Data Files

For a file to be replicated, it must have a unique qualifying replication index. The following conditions must be met for the index and its file to qualify for replication:

  1. The index must be unique.
  2. The index must not have null key support enabled.
  3. The index must not have a conditional index expression.
  4. The index must be a permanent (not temporary) index.
  5. Low-level updates are not allowed on files marked for replication.

    Note: Indexes with serial segments (SRLSEG) and/or IDENTITY support require additional configuration considerations as described in Replicating Auto-Incrementing Serial Segment Values.

  6. Data files require an extended header to be eligible for replication.
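The index conditions above (items 1 through 4) can be expressed as a simple checklist. This is an illustrative sketch using a plain dictionary of index properties; the key names are assumptions, not a FairCom API:

```python
def qualifies_as_replication_index(index):
    """Check an index description against the replication-index conditions.

    `index` is a plain dict; the key names are illustrative labels for
    the conditions listed above, not actual FairCom metadata fields.
    """
    return (
        index.get("unique", False)               # 1. must be unique
        and not index.get("null_key", False)     # 2. no null key support
        and not index.get("conditional", False)  # 3. no conditional expression
        and index.get("permanent", True)         # 4. permanent, not temporary
    )

good = {"unique": True, "null_key": False, "conditional": False, "permanent": True}
bad = {"unique": True, "null_key": True, "conditional": False, "permanent": True}
print(qualifies_as_replication_index(good))  # True
print(qualifies_as_replication_index(bad))   # False
```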

IFIL Resource

The data files being replicated must contain an IFIL resource.


Replication Network Recommendations

Purchase the fastest Ethernet Network Interface Cards (NICs) compatible with your server. For high availability, purchase two NICs per server. We recommend 100GbE NICs. The model of NIC depends on the server hardware. For failover between two servers, the NICs can be directly connected to each other. For failover across three or more servers, the NICs must be connected to a compatible network switch that runs at the same speed as the NICs or faster.

Configure firewall port access as follows.

For the Replication Manager:

Add incoming network port rules for:

  • HTTP port 80 or HTTPS port 443.
  • Custom TCP port 5532 (the FairCom RM ISAM port; it depends on the server name and can be found in the CTSTATUS log file).
  • Custom TCP port 7000 (the FairCom RM SQL port; it can be found in the ctsrvr.cfg configuration file).

For Replication Clients:

Add incoming network port rules for:

  • Custom TCP port 5597 (the FairCom DB ISAM port; it depends on the server name and can be found in the CTSTATUS log file).
  • Custom TCP port 6597 (the FairCom DB SQL port; it can be found in the ctsrvr.cfg configuration file).
  • HTTP port 80 or HTTPS port 443 (only needed if you want to access the web tools).
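After adding the firewall rules above, it can be useful to verify that the ports are actually reachable. The following is a small generic TCP check using the Python standard library; the host address is a placeholder for your replication client or server:

```python
import socket

def port_is_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout.

    A generic reachability check for the firewall rules above; it only
    tests that something is listening, not that it is a FairCom service.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        return sock.connect_ex((host, port)) == 0  # 0 means connected

# Example: check the FairCom DB ISAM and SQL ports on a replication client.
# Replace 127.0.0.1 with the address of the machine you are testing.
for port in (5597, 6597):
    print(port, "open" if port_is_open("127.0.0.1", port, timeout=0.5) else "closed")
```

Remember that port 5597 (and 5532) can vary with the server name, so confirm the actual port in the CTSTATUS log file before testing.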