Database Administrator's Guide

Millions of Records per Transaction

Large transactions are sometimes unavoidable. Recent examples include a large-scale database purge that spanned many cross-referenced tables and a version update that rewrote an entire data table. In both cases, the transaction size pushed the limits of performance. Regardless of size, database changes must persist to the write-ahead log to guarantee atomicity and recovery. FairCom database optimizations make these operations as fast as possible, rivaling or beating comparable databases.

FairCom DB can efficiently support large transactions without consuming excessive memory. Enhanced configuration options limit the amount of data stored in memory during a transaction. After the transaction has reached this limit, the server creates a swap file and stores subsequent data in the swap file.

Note that internal structures remain allocated, so memory use still increases somewhat. However, as record images and key values are temporarily written to the swap file, memory usage is greatly reduced when updating many records or very large records.

This feature is controlled with the following configuration options in ctsrvr.cfg (an example configuration follows the list):

  • MAX_PREIMAGE_DATA <limit> limits the amount of in-memory data allocated by a transaction to <limit>. After this limit has been reached, subsequent allocations are stored in a preimage swap file on disk. The default value is 1 GB.
  • MAX_PREIMAGE_SWAP <limit> sets the maximum size of the preimage swap file on disk. If the file reaches its maximum size and a transaction attempts to allocate more space in the file, the operation fails with error TSHD_ERR (72). The default value is zero (meaning no limit).
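
As an illustration, the following ctsrvr.cfg sketch caps in-memory preimage data at 512 MB per transaction and limits the preimage swap file to 8 GB. The keyword names come from the list above; the specific values, and the assumption that the keywords take plain byte counts, are illustrative only and should be confirmed against the server configuration reference.

  ; Cap in-memory preimage data per transaction at 512 MB (byte value is assumed syntax);
  ; allocations beyond this limit go to the preimage swap file on disk.
  MAX_PREIMAGE_DATA 536870912
  ; Cap the preimage swap file at 8 GB; the default of 0 means no limit.
  MAX_PREIMAGE_SWAP 8589934592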

Security Note: The preimage swap file contains the data record images and key values updated by the transaction. Even if the corresponding data file or index file has encryption enabled, the preimage swap file contents are encrypted only when the LOG_ENCRYPT configuration option is used. If advanced encryption is enabled, the preimage swap file is encrypted using the AES cipher. If not, the preimage swap file uses FairCom's proprietary ctCAMO algorithm, which is a simple masking of the contents rather than a form of industry-standard encryption.
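
For example, a ctsrvr.cfg sketch that keeps the preimage swap file protected might look like the following. LOG_ENCRYPT is the keyword named in the note above; the YES value and the ADVANCED_ENCRYPTION keyword are assumptions about the syntax for enabling advanced (AES) encryption and should be verified against the server configuration reference.

  ; Encrypt transaction log and preimage swap file contents
  ; (YES value is assumed syntax; confirm in the configuration reference)
  LOG_ENCRYPT YES
  ; Assumed keyword for enabling advanced (AES) encryption; without it,
  ; swap file contents fall back to the proprietary ctCAMO masking.
  ADVANCED_ENCRYPTION YES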

Changes to Hashing and PREIMAGE_HASH_MAX

Previously, when updating many records in one transaction, the update rate slowed over time. This happened even when a table lock was acquired on the table and preimage memory use was reduced with the MAX_PREIMAGE_DATA configuration option.

A hash table is used to efficiently search the preimage space entries that contain updated record images. This change improves the hashing in two areas:

  1. The previous limit of 1048583 hash bins has been removed.
  2. The hash function now provides a more even distribution of values over the hash bins.

For maximum performance, the hash function can now use up to 2^31-1 hash bins. We also modified the lock hash function in the same manner.

Prior to this change, dynamic hashing defaulted to a maximum of 128K hash bins. The PREIMAGE_HASH_MAX keyword can raise this limit.

Hint: PREIMAGE_HASH_MAX and LOCK_HASH_MAX keyword values larger than 1 million can provide additional performance benefits for large transactions.
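
As a sketch, raising both hash limits in ctsrvr.cfg might look like the following; the values shown are illustrative only and should be tuned for the workload and available memory.

  ; Allow up to 4 million hash bins for preimage and lock hashing
  ; (values are illustrative; values larger than 1 million benefit large transactions)
  PREIMAGE_HASH_MAX 4194304
  LOCK_HASH_MAX 4194304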

Default: The default value of the PREIMAGE_HASH_MAX configuration option has been changed from 131072 to 1048576 so that the dynamic hash of the preimage space entries is more effective for large transactions without requiring the server administrator to remember to increase this setting.
