Replication agent
Manually configure data replication between FairCom servers using Replication Agent
This section explains how to configure manual replication with the Replication Agent, using configuration files instead of Replication Manager.
Synchronizing data
Before replication can begin, the files to be replicated must first be synchronized between systems. There are multiple ways to achieve this: simple file system copies can be used, as can Hot Backups (dynamic dumps) and restores, which directly coordinate starting positions with replication. This section describes the best option for each scenario.
Note
When adding a server to a replicated system, it is important to pause transaction log purging on the source server before copying the files. This prevents the source server from deleting transaction logs that are still needed while the files are being copied.
You will need to start the Replication Agent reading the source server's transaction log at the position that corresponds to the point in time where you copied the files. To accomplish this, you must ensure the source server keeps the transaction logs from the time you copied the files until you have started the new Replication Agent. Step 1 through Step 3 in the Preferred procedures show exactly how this is done. The Quick guide provides the basic steps.
Stop the source server and target servers.
Copy the files.
Block all traffic to the source server at the system or network level.
Note
Any source activity at this point may break the point-in-time relationship between the copied files and the transaction logs, leading to inconsistent data between the source and target. Preferred procedures prevent this vulnerability.
Generate a ctreplagent_<agentid>.ini file containing the entry #current.
Start the source server, the target server, and the Replication Agent.
Release traffic to the source server.
The steps below explain the general approach to adding a server to a replicated system and starting it at the correct point in time.
When you are ready to make a copy of the files from the source server, first run ctstat -var on the source server.

Example 1. Run ctstat -var

# ctstat -var -h 1 -u ADMIN -p ADMIN -s source
name            lowlog  curlog  curpos
source(SERVER)  6       9       9237667
Observe and take note of the server's current log number (curlog) and log position (curpos). In Example 1, log 9 is the current log used by the server. It is recommended to set the minimum log to the previous log (in this case, log 8).

Use the ctrepd utility with the -setlog and -unqid options to instruct the source server to keep the current log and later logs, based on the log number determined in Step 2.

Example 2. The ctrepd utility

If your new Replication Agent will use the ID MYREPLAGENTID, you want to set the minimum transaction log to 8, and the source server is named source, use this command:

# ctrepd -unqid:MYREPLAGENTID -setlog:8 -s source
name            lowlog  curlog  curpos
source(SERVER)  6       9       9237667
-MYREPLAGENTID  8       0       0

This causes the source server to keep log 8 and later logs until your new Replication Agent has been started and has processed that log's operations.
Create a point-in-time copy of the files from the source server using either of the following options:
Dynamic Dump
Use dynamic dump to make a copy of the files from the source server, and then restore them on the target server using the ctrdmp utility. The utility creates the file ctreplagent_<agentid>.ini containing the log number and log position for your new Replication Agent to use. This approach has the advantage that the source server is paused only briefly while it reaches a quiet transaction state. The server can then resume normal operations while the files are copied to the dynamic dump backup file.

Quiesce & Copy
Use the ctQUIET() function (for example, through the ctadmn utility) to pause the source server and close the files. Run ctstat -var on the source server and note the server's current log number and log position after you have paused the source server. Make a copy of the files using a system copy command, then resume server operation. In the Replication Agent's working directory, create ctreplagent_<agentid>.ini as a text file containing the following two lines:

log_number log_offset
log_number log_offset
Example 3. Two lines in the ctreplagent_<agentid>.ini file

If your source server is currently on log 35 at offset 12345678, create ctreplagent_<agentid>.ini containing the following two lines:

35 12345678
35 12345678
Copy the files created in Step 4 from the source server to the target server.
If you used Quiesce & Copy in Step 4, create ctreplagent_<agentid>.ini in the Replication Agent's working directory.
Start the Replication Agent.
Note
The Replication Agent looks for the ctreplagent_<agentid>.ini file in its working directory when it starts up. If the file exists, the agent sets its position to the position specified in the file. It then deletes the ctreplagent_<agentid>.ini file so it does not use that position again the next time it starts. If you used Dynamic Dump in Step 4, the Replication Agent finds the correct starting point in the ctreplagent_<agentid>.ini created by the dynamic dump restore that you copied to the Replication Agent's working directory.

Using ctstat -var on the source server, observe that the Replication Agent is now connected and is progressing through the transaction logs.
Copy files manually when appropriate. As long as the source and target servers are not running and the data and index files are in a clean state, they can be copied with any OS utility such as rsync, or compressed into an archive, transferred, and uncompressed on the remote target. When copying in this manner, it is assumed that replication will commence from a clean starting point. It is important to remove all existing REPLSTATEDT.FCS files from the target.
Alternatively, creating a ctreplagent_<agentid>.ini file with a single #current entry forces replication to commence with the first entry added to the existing source transaction logs.
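For example, assuming the agent's unique ID is MYREPLAGENTID (the ID used in Example 2), place a one-line ctreplagent_MYREPLAGENTID.ini in the Replication Agent's working directory:

```
#current
```

On startup, the agent reads this file, positions itself at the current end of the source server's transaction logs, and then deletes the file.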
The FairCom DB Dynamic Dump restore process automatically generates a ctreplagent_<agentid>.ini starting file after a successful restore. The position recorded in ctreplagent_<agentid>.ini corresponds to the point at which the restored files were originally backed up.
Create a Dynamic Dump script to back up files for replication.
Stop the secondary server and the Replication Agent if already running.
Execute the Dynamic Dump at the desired point in time on the primary server.
Tip
Use the !DATE and !TIME options to automatically schedule your dump.

Using the same dynamic dump script used to back up the files, restore the files on the secondary server with the ctrdmp utility.

Tip
Consider the !REDIRECT option for repositioning files if the directory structures differ between the servers.

Copy the ctreplagent_<agentid>.ini file into the Replication Agent's working directory.
Restart the secondary server with the data now in sync with the primary server.
Restart the Replication Agent.
Replication then commences from the log position on the primary server when the files were initially backed up, replicating changes since the backup and quickly synchronizing the secondary server with the primary.
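The steps above can be sketched as a dynamic dump script. The file names and schedule below are illustrative placeholders; consult the Dynamic Dump documentation for the full script syntax:

```
!DUMP replbackup.dmp
!DATE 11/13/2020
!TIME 23:00:00
!FILES
prod/*.dat
prod/*.idx
!END
```

The same script is then passed to ctrdmp on the secondary server to restore the files and generate the ctreplagent_<agentid>.ini starting file.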
Keep the required transaction logs
Once the Replication Agent starts, it registers its log requirements with the source server (when using the default agent configuration option remember_log_pos), and from that point on the source server maintains enough logs for that Replication Agent to continue from its position when it restarts. For example, if a replica went offline for a day, all the logs from the time it stopped are kept by the source server.
However, before a particular Replication Agent is first connected to a source server (only the very first time) you need to ensure that the transaction logs from the time of your dynamic dump are maintained until you get the Replication Agents started. You can do this by starting and then stopping each Replication Agent prior to making the dynamic dump. This will establish a minimum log requirement associated with that Replication Agent (reader) unique ID so the source server does not discard any required transaction logs until the Replication Agent has consumed them.
If you have already had these agents running against the source server and are simply changing target servers, then you can ignore this since they will already have established a minimum log requirement.
Ideally, the target file in a replication environment would never be out of sync with the source. In some situations, however, the target server or the Replication Agent goes down or loses its connection. When all parts of the system are up again and the connections are reestablished, the Replication Agent catches up with all the updates that must be applied from the moment it stopped replicating. Although it may take some time to apply the missing updates from the backlog, it should not miss any. Nevertheless, problems can occur in this process that leave the replica file out of sync, for example:
A lost transaction log file from the source side was not applied to the target, making it impossible to bring the target copy back in sync.
A conflict while applying the modification to the target (perhaps an inadvertent update to the target while replication was inactive).
The replication resync
feature provides a way to stop replication on one or more files in a replicated environment and resynchronize them. It copies current files from the source server to the target server and then restarts the replication of these files.
The administrator can use repadm -c resync and repadm -c resyncclean to manage the resync operation. These modes allow the user to specify the name of the source file to be synchronized; a text file with a list of source file names can be provided when multiple files are to be synchronized. Based on each source file name, the Replication Agent identifies the target file using the existing redirect rules.
Note
FairCom replication support is backward compatible with replication deployments prior to V12. Legacy replication configurations can continue to be used independently of Replication Manager, which may be desirable in scenarios such as an application using a predefined replication setup.
ctagent plugin
The 2020 release of the FairCom Database Engine introduces plugins, which run applications inside the database process. The ctagent plugin provides the built-in replication feature of each FairCom server, replacing the previous replication components. All original Replication Agent process functionality is now encapsulated in this multi-threaded server-side module.
Replication models
Note
All modes are available in V12 and newer FairCom servers, as well as in Replication Manager, for extended flexibility in meeting your replication needs.
Managed
FairCom Replication Manager is the default FairCom replication support. This model takes advantage of a centrally managed replication database for persisting replication plans containing publications and subscriptions. Details for this method are available in the Replication Manager guide.
Manual Replication Manager Remote Mode
In this model, Replication Manager can be deployed and used exactly like the prior Replication Agent model for existing environments as they migrate to full Replication Manager support. That is, it can be located on any node in your replication network topology. Using Replication Manager in this mode is a practical transition model to consider. This model does not persist replication plans, publications, and subscriptions in the central database; replication is configured exactly as before in the external ctreplagent.cfg files. However, you gain the benefit of having the new replication support deployed and ready to tackle new replication challenges.

Manual Local Mode
This model enables local direct server replication based on the previous Replication Agent ctreplagent.cfg configuration. This mode is much like prior FairCom replication support, but without requiring a separate Replication Agent executable. Replication functionality is provided within the servers themselves by the ctagent plugin, simplifying deployment and configuration.
Configure the ctagent plugin for manual mode
Each FairCom database plugin, including ctagent, is configured with its own configuration file in JSON format for ease of use.

Manual replication mode refers to using existing ctreplagent.cfg configurations, that is, configurations not managed via Replication Manager. A simple JSON configuration in ctagent.json points to the existing configuration files for immediate transition to this model.
Note
In manual mode, all replication-definition operations (definition reads and persistence) are skipped. From a configuration standpoint, all paths are generally relative to the directory of the server process in which ctagent executes.
"managed": < true | false >
The default is
true
, which is the default Replication Manager usage.false
enables manual mode. Whenfalse
, thectagent
plugin doesn't connect to a central replication database and executes within the process in which it is configured.Note
In case of
false
, or omitted, even if the replication database is not available it keeps trying to reconnect automatically from time to time."configurationFileList": [ <cfg file1>, <cfg file>, and so forth ]
The default is an empty array. Each configuration filename element can be a full path or relative to the server working directory. Each element creates one extra instance of a Replication Agent thread to be started from the server
ctagent
plugin. No central replication database connection is necessary for starting these instances, however, ifctagent
is configured within the Replication Manager in this manner, it is also able to start these separatedctreplagent.cfg
instances coexisting with managed instances managed from replication definitions for total flexibility.
Example
Given that all classic legacy files are located in a replication folder alongside the config and server folders, consider the following manual ctagent configuration:

{
  "managed": false,
  "configurationFileList": [
    "../replication/ctreplagent1.cfg",
    "../replication/ctreplagent2.cfg"
  ]
}
As with prior replication, authentication files are required in manual mode. In ctreplagent.cfg, the source_authfile and target_authfile entries should include a path relative to the main server process.

; Source server connection info
source_authfile ../replication/source_auth.set
source_server FAIRCOMS@localhost

; Target server connection info
target_authfile ../replication/target_auth.set
target_server FAIRCOMT@localhost
On success, you should find the following messages logged to CTSTATUS.FCS:

Wed Oct 28 15:49:20 2020 - User# 00026 ctagentd - Starting Agent
Wed Oct 28 15:49:20 2020 - User# 00026 ctagentd - Agent is detached from Memphis...
Wed Oct 28 15:49:20 2020 - User# 00026 ctagentd - Agent plugin is ready
The replication status log (ctreplagent.log) is located in the main server folder unless specifically configured by the log_file_name option in ctreplagent.cfg. Setting this option is essential when more than one replication agent is configured, so that each replication log is separately maintained.

Fri Nov 13 09:15:28 2020: INF: ** Starting replication agent logging for process 9616
Fri Nov 13 09:15:28 2020: INF: Using replication change buffer version 2
Fri Nov 13 09:15:30 2020: INF: Connected to data target 'FAIRCOMT@localhost'.
Fri Nov 13 09:15:30 2020: INF: Connected to data source 'FAIRCOMS@localhost'.
Fri Nov 13 09:15:30 2020: INF: Successfully set file filter using XML file '<..\config\replication\filefilter.xml'
Fri Nov 13 09:15:30 2020: INF: Started replication agent.
Fri Nov 13 09:15:30 2020: INF: Starting scan with log 1, position 65536
Replicated file definitions
Replication support previously required specific server configuration entries in ctsrvr.cfg defining exactly which files are eligible for replication, with wildcard support for flexibility. However, changing these entries required server downtime, making it an administrative exception task.

Replicated file definitions can now be added directly to your ctreplagent.cfg file, avoiding downtime on production servers. This allows easier configuration and ad hoc modification. An XML-based file filter definition is used to define the replicated files. It can be embedded directly in ctreplagent.cfg or referenced from an external XML file.
File filter definition
A replication file filter contains a set of <file> tags. Each <file> tag indicates if the specified filename is included by or excluded from replication.
In the following example, the replication agent using the replication file filter behaves as follows:
a) Reads and applies replicated operations for all files named *.dat in the prod directory and its subdirectories,
b) Ignores replicated operations for all files named *.dat in the prod/temp directory and its subdirectories,
c) Reads and applies replicated operations for all files named *.data, *.dat, and *.db in the ctreeSQL.dbs directory and its subdirectories.
<?xml version="1.0" encoding="us-ascii"?>
<replfilefilter version="1" recurse="y" persistent="y" agent="REPLAGENT">
  <file status="include">prod/*.dat</file>
  <file status="exclude">prod/temp/*.dat</file>
  <file status="include" regexPath=".\ctreeSQL.dbs">.*((data|dat|db))</file>
  <purpose>create_file</purpose>
  <purpose>read_log</purpose>
</replfilefilter>
Each <file> entry takes the place of a REPLICATE entry in ctsrvr.cfg.
Note
Regular expression filters can be used in the file descriptions, allowing detailed and complex file specifications in a single entry.
This XML entry is referenced in your ctreplagent.cfg file with the file_filter configuration option:

file_filter <../config/replication/filter1.xml
Note
The single < character preceding the filename is intentional. Do not include a closing > or you will receive an error.
The <purpose> element defines the operations available for this replication usage, including:

create_file
read_log
sync_commit
File definitions can also be specified in an external XML file. This allows replication to be enabled dynamically, without stopping and starting servers for configuration changes. The file can be set in ctreplagent.cfg and changed at runtime using repadm -c changefilter and repadm -c removefilter, with the -x option indicating the external filter file name. repadm requests that the replication agent perform the specified filter operation, and the replication agent applies the filter to the source server if the filter's purposes indicate that it applies to the source server.
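For illustration, swapping in a new external filter at runtime might look like the following (the agent name replagent matches the repadm examples later in this document, and filter2.xml is a placeholder; check repadm help for the exact syntax in your version):

```
repadm -c changefilter -x filter2.xml -s replagent
```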
<?xml version="1.0" encoding="us-ascii"?>
<replfilefilter version="1" persistent="y" agent="REPLAGENT">
  <file status="include">mark.dat</file>
  <file status="exclude">donotreplicate*.dat</file>
  <file status="include" regexPath=".\ctreeSQL.dbs">.*((data|dat|db))</file>
  <purpose>create_file</purpose>
  <purpose>open_file</purpose>
  <purpose>read_log</purpose>
  <purpose>sync_commit</purpose>
</replfilefilter>
Attributes
| Attribute | Description | Type | Supported values | Default |
|---|---|---|---|---|
| agent | Specifies the unique ID of the replication agent to which the replication file filter applies. Using this option makes it possible for a server to have more than one replication file filter in effect when more than one replication agent is using its own replication file filter. | string | 0 to 31 bytes | "" (empty string) |
| persistent | Specifies the persistent mode. If set to "y", the filter definition on the source persists such that it is automatically loaded on startup. | string | "y", "n" | |
| recurse | Specifies whether directory matching for wildcard filenames is recursive. | string | "y", "n" | |
| | Specifies the file filter update process. If set to "0", this file filter replaces the existing file filter if it exists. If set to "1", the current replication file filter is updated with this file filter's definitions. | string | "0", "1" | |
| version | Specifies the XML format version defining the current specification. | string | A version number in the format "major.minor" | Required; no default value |
Elements
| Element | Description |
|---|---|
| file | Specifies a file to include in or exclude from replication. Regular expressions are supported in the file specification. |
| purpose | Indicates the purposes of the filter. |
| create_file | Specifies automatic replication of files that the source server creates. |
| open_file | Specifies automatic replication of files that the source server opens. |
| read_log | Specifies that the file filter is to be used when reading transaction log entries from the source server. When the file filter includes this purpose, the replication agent reads and applies only the replicated operations for files whose names match an include rule and do not match an exclude rule in the file filter. When the file filter does not include this purpose, the replication agent reads and applies the replicated operations for all files in the source server's transaction logs. When the replication agent uses serial replication, log read filtering occurs on the source server, so the replication agent sends the file filter to the source server. When the replication agent uses parallel replication, log read filtering occurs on the server in which the replication agent is running, because parallel replication reads replicated operations from a copy of the source server's transaction logs. |
| sync_commit | Specifies that this replication agent synchronously replicates the files specified in the file filter. |
Several features of replication depend on specific server configurations.
Important
If the server has both SERVER_NAME and SERVER_PORT configured, it always uses SERVER_PORT, so even a local machine connection uses TCP/IP instead of shared memory.
Enable replication by inserting the following lines into the ctsrvr.cfg file:

; Plugins
PLUGIN cthttpd;./web/cthttpd.dll
PLUGIN ctagent;./agent/ctAgent.dll
These lines enable the plugin that handles web communication (configured in services.json) and the agent that performs the replication (ctagent).

Use the ctagent.json and services.json files to configure the replication plugin:

ctagent.json

{
  "embedded": true,
  "log_file": "agentLog.txt",
  "memphis_server_name": "MEMPHIS",
  "memphis_sql_port": 7000,
  "memphis_host": "localhost",
  "memphis_database": "MEMPHIS",
  "ctree_check_mask": "*.dat;*.idx;*.fdd;*.fsd",
  "inactive_timeout": 600
}
services.json

{
  "listening_http_port": 8080,
  "listening_https_port": 8443,
  "ssl_certificate": "./web/fccert.pem",
  "document_root": "./web/apps",
  "applications": [
    "ctree;ctREST.dll",
    "AceMonitor;ctMonitor.dll",
    "SQLExplorer;ctSQLExplorer.dll",
    "ISAMExplorer;ctISAMExplorer.dll",
    "ReplicationManager;ctReplicationManager.dll"
  ]
}
Optionally enable diagnostic logging by adding the following lines to ctreplagent.cfg:
diagnostics logship
diagnostics logread
The replication agent supports optional diagnostic logging for its log ship and log read threads. When diagnostic logging is enabled, these threads write messages to ctreplagent.log to indicate the offset that the log ship thread has written to the copy of the log files, and the log read thread is reading. The offset=[a,b,c] values in the log read message are (a) the requested log read offset, (b) the last byte actually read, and (c) the last requested byte to read. (b) and (c) are equal if the full amount was read (that is, bytesread is equal to size). We expect these log read offset values to always be less than the log ship's current write log position. These example messages show an expected situation: the last byte read by the log read thread is 0x789ab, which is equal to the last byte written by the log ship thread as shown in the "write to log copy position" message.
Use the repadm utility to turn log diagnostic options on and off
Turn on diagnostic logging for log ship and log read threads:
repadm -c setconfig:"diagnostics logship" -s replagent
repadm -c setconfig:"diagnostics logread" -s replagent
Turn off diagnostic logging for log ship and log read threads:
repadm -c setconfig:"diagnostics ~logship" -s replagent
repadm -c setconfig:"diagnostics ~logread" -s replagent
Example diagnostic messages:
Fri Jul 26 16:40:56 2024: AGENT1: Logship: DEBUG: LOGSHIPDIAG: write to log copy position 8 0x78179 (491897) through 0x789ab (493995)
Fri Jul 26 16:40:56 2024: AGENT1: Logship: DEBUG: LOGSHIPDIAG: set read log position to 8 0x789ab (493995)
Fri Jul 26 16:40:56 2024: AGENT1: Logship: DEBUG: LOGSHIPDIAG: set write log position to 8 0x789ac (493996)
A replication plugin runs in the FairCom server and executes all data replication.
If you are running Replication Manager on the same machine as the FairCom DB engine, you will need to adjust the ports so they do not conflict. Select unused values for "listening_http_port" and "listening_https_port". The ports available on your machine depend on your environment; a good starting point is to simply add 1 to each of the port numbers listed in the configuration file.

If you want to experiment with replication in your lab, you can set up Replication Manager and two FairCom DB servers on the same machine. In that case, all three servers must use unique ports.
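For example, a second instance on the same machine might shift each port by one. Only the port entries are shown here (the values are illustrative; the rest of services.json is unchanged):

```json
{
  "listening_http_port": 8081,
  "listening_https_port": 8444
}
```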
The FairCom server assigns a node ID to FairCom replication. The local/master replication scheme also assigns a node ID to connections that are established from the target server to the source server.
A local FairCom server can set its replication node ID by using the REPL_NODEID option in ctsrvr.cfg.

; source server configuration
SERVER_NAME MASTER
REPL_NODEID 10.0.0.1

; target server 1 configuration
SERVER_NAME LOCAL01
REPL_NODEID 10.0.0.2

; target server 2 configuration
SERVER_NAME LOCAL02
REPL_NODEID 10.0.0.3
ID values are arbitrary and do not need to match the IP address of the system on which the FairCom server is running. They only need to be unique and must not change during replication. This example uses IPv4-style values.
FairCom replication reads the node ID of the source and target servers. If the node ID is not set for a source or target server, FairCom replication uses the IP address of the system on which that FairCom DB is running.
Dynamically set replication node ID
A replication node ID value (REPL_NODEID) associates a replication agent with a specific source or target server. This required association prevents replication agents from arbitrarily connecting to inappropriate servers. Previously, setting a REPL_NODEID required modifying the server configuration file, ctsrvr.cfg, and restarting the server, which is challenging in round-the-clock production environments.
The FairCom server now supports setting a REPL_NODEID
value dynamically at runtime if it has not yet been set. This keyword sets the node ID of a local FairCom server for use in replication.
To set the value, use the ctadmn utility's option to change a server configuration option value (option 10) or call the ctSETCFG() function.

NINT rc;
rc = ctSETCFG(setcfgCONFIG_OPTION, value);
Here value is the REPL_NODEID value to set, specified as a NULL-terminated string in IPv4 format (n.n.n.n), for example "10.0.0.1". (ID values are arbitrary and do not need to match the IP address of the server.)
Note
The value is only changed in memory and not persisted to disk. Be sure to update the configuration file such that the new value is persisted and enabled on the next server startup.
The FairCom server has an option that allows the replication log reader thread to read transaction log data into a buffer for better performance. When a reader requests data that is already in the buffer, a noticeable performance gain can usually be observed. This feature is on by default. The size of the replication log read buffer can be set using the REPL_READ_BUFFER_SIZE configuration keyword.
REPL_READ_BUFFER_SIZE 8192
Without this keyword, the default buffer size is 8 KB. A minimum buffer size of 512 bytes is enforced. A value of zero will disable the replication log read buffer entirely.
To aid in troubleshooting when first using this feature, an additional diagnostic keyword can be specified to check that the log data is correctly read.
DIAGNOSTICS REPL_READ_BUFFER
Serial segment values
To support the replication of a file that contains an auto-incrementing SRLSEG index, FairCom server configuration options are available to control the effect of this value. These values are set in the respective FairCom server's configuration file.
REPL_SRLSEG_ALLOW_UNQKEY YES causes the FairCom DB server to allow an index that contains a SRLSEG segment to be the replication unique key.

Note
The SRLSEG index mode allows auto-incrementing values to become part of a data record. By default, such an index cannot be the replication unique key index; this option makes it possible.
REPL_SRLSEG_USE_SOURCE YES causes a FairCom server replication writer thread to fill in the serial number value from the source server when adding a record to a replicated file (this keyword applies to the standard replication model). This preserves existing serial numbering.

Note
The actual value of a replicated serial segment value is an application-dependent attribute of replication. To control if this number should be re-assigned or not during replication, this option is provided. By default, the serial segment value is re-assigned when applied to the target server.
REPL_SRLSEG_USE_MASTER YES causes a local FairCom server to fill in the serial number value from the master FairCom server when adding a record to a local replica (this keyword applies to the synchronous master/local replication model).

Note
The actual value of a replicated serial segment value is an application-dependent attribute of replication. To control if this number should be re-assigned or not during replication, this option is provided. By default, the serial segment value is re-assigned when applied to the target server.
IDENTITY values
SQL support frequently requires an additional IDENTITY field beyond the FairCom SRLSEG key mode, which is already reserved for ROWID values. Replicating IDENTITY values employs additional options.
REPL_IDENTITY_USE_SOURCE YES causes the target FairCom server to use the identity field value from the source FairCom server when adding a record to a replicated file.

REPL_IDENTITY_USE_MASTER YES causes the local FairCom server to use the serial number value from the master FairCom server when adding a record to a local replica.
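Putting these options together, a target server in the standard replication model that preserves source-assigned serial and identity values might use a ctsrvr.cfg fragment like this:

```
; allow a SRLSEG index to serve as the replication unique key
REPL_SRLSEG_ALLOW_UNQKEY YES
; keep the serial number assigned by the source server
REPL_SRLSEG_USE_SOURCE YES
; keep the identity value assigned by the source server
REPL_IDENTITY_USE_SOURCE YES
```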
The error IMIS_ERR (958, data record is missing identity field value) is returned by a record add operation when the supplied record image does not contain the identity field value. This is an unusual case; however, it can occur if the record image does not contain all the fields of the record.
Note
These options do not address issues of bidirectional replication involving files containing SRLSEG and/or IDENTITY fields.
Transaction log requirements for Replication Agent
Note
This is a compatibility change for the 2017 release of the FairCom engine and later.
The FairCom replication threads for TRNLOG files read transaction logs and inform the server of their minimum transaction log requirements.

By default, the FairCom server now limits the number of active transaction logs kept for FairCom replication to 100. This default avoids potentially running out of critical drive space.
The MAX_REPL_LOGS <max_logs> configuration option controls this limit, setting the maximum number of transaction logs held specifically for replication.
These sections provide more information about the MAX_REPL_LOGS <max_logs>
setting (and the related setting for deferred indexing logs).
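For example, to raise the limit so that replication can hold more logs during an extended replica outage (the value 200 is illustrative), add the following to ctsrvr.cfg:

```
MAX_REPL_LOGS 200
```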
KEEP_LOGS setting to keep inactive logs

To ensure the database engine always keeps enough logs to allow the log reader to process them before they are deleted, use the KEEP_LOGS configuration keyword:

; Keep all transaction logs
KEEP_LOGS -1
Specify -1 to keep all logs (in which case old logs must be manually deleted when done), or specify a positive number that indicates the number of logs to keep. This number should be the largest number of logs required if the log reader is stopped or falls behind in processing logs.
A server configuration option allows you to change the log offset returned by the replication log scan for an operation that consists of multiple log entries, such as REPL_ADDREC. By default, the offset of the first log entry that comprises the operation is returned.
This configuration option makes it possible to change the behavior so that the offset of the last log entry that comprises the operation is returned.
REPL_USE_LAST_LOGPOS <YES | NO>
The default is NO.

This option can be changed at runtime with the server administrator utility ctadmn.

This option should be used when applications expect replication operations to be returned in ascending log offset order.
In addition to this option, the option COMPATIBILITY NO_REPL_DEFER_TRAN must be used to avoid reordering of transaction begin operations, which is caused by deferring the return of the begin-transaction operation until an operation of interest to the replication reader has been read from the log.
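As a sketch, the two settings described above would appear together in ctsrvr.cfg as:

```
; return the offset of the last log entry of a multi-entry operation
REPL_USE_LAST_LOGPOS YES
; avoid reordering of transaction begin operations
COMPATIBILITY NO_REPL_DEFER_TRAN
```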