FairCom Database Engine supports setting a maximum number of active partitions on a partitioned file. When a new partition is created, if the new number of active partitions exceeds the limit, the oldest partitions are purged.
This feature is well suited to a partition key based on time or on an incrementing sequence number. In that case the partition number matches the order in which partitions are created, so the automatic purge keeps only the N most recent partitions.
Support for setting the maximum number of partitions has been implemented in the FairCom Low-Level API function PUTHDR(). This feature is also available through the c-treeDB, C++, and REST APIs.
To set this value, call PUTHDR(partition_host_data_file_number, max_partition_members, ctMAXPARTMBRhdr) after creating the partition host.
A max_partition_members value of zero is the default and means that no maximum number of partitions is enforced.
Note that c-tree itself supports up to 65535 active partitions per partition host, so this PUTHDR() call fails with error PBAD_ERR if a max_partition_members value larger than 65535 is specified.
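The following sketch shows how this call might look from the C Low-Level API. It is illustrative only: the header name, the host_filno variable, the limit of 30, and the return-value handling are assumptions; only PUTHDR(), ctMAXPARTMBRhdr, the zero default, and the PBAD_ERR behavior come from the description above.

#include <stdio.h>
#include "ctreep.h"    /* c-tree Low-Level API prototypes; header name may vary by installation */

/* Limit a partition host to its 30 most recent partitions.
   host_filno is assumed to be the data file number of the partition host
   obtained when it was created or opened; the limit of 30 and the error
   handling are illustrative only. */
COUNT set_partition_limit(COUNT host_filno)
{
    LONG max_partition_members = 30;   /* 0 restores the default: no maximum */

    /* ctMAXPARTMBRhdr sets the maximum number of active partitions.
       Values larger than 65535 are rejected with PBAD_ERR. */
    COUNT rc = PUTHDR(host_filno, max_partition_members, ctMAXPARTMBRhdr);
    if (rc != 0)
        printf("PUTHDR(ctMAXPARTMBRhdr) failed with error %d\n", rc);
    return rc;
}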
Typical Scenario
The following example clarifies how the partition purge behaves:
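For example (illustrative numbers): suppose max_partition_members is set to 3 and partitions 1, 2, and 3 are active. Creating partition 4 purges partition 1, leaving partitions 2, 3, and 4. If the partition key then jumps ahead so that the next partition created is partition 10, only partitions in the range 8 through 10 can remain active, so partitions 2, 3, and 4 are purged, as the note below explains.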
Note: The number of active partitions is calculated from the highest-numbered active partition. When a new highest-numbered partition N comes into existence, only that partition and the max_partition_members - 1 consecutive partitions immediately preceding it are kept.
Limitations:
API Changes:
The PUTHDR() function now accepts a mode of ctMAXPARTMBRhdr; the specified value sets the maximum number of active partitions on a partition host file.
The ctFILBLK() function now accepts a new mode bit, ctFBfailIfTranUpdated. When this mode bit is used and the file being blocked has been updated in a transaction that is still active, the file block fails with error code 1136 (FBTU_ERR) instead of aborting that transaction: "File block failed because the file has been updated in an active transaction and the caller requested that the file block should fail in this situation instead of aborting the transaction."
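A minimal sketch of how an application might use this mode bit follows. It assumes ctFILBLK() takes an open data file number and a mode word, and that a basic block-request mode (shown here as ctFBblock) is what the new bit is combined with; the header name, that mode name, and the error handling are assumptions, while ctFBfailIfTranUpdated and FBTU_ERR (1136) come from the description above.

#include <stdio.h>
#include "ctreep.h"    /* c-tree Low-Level API prototypes; header name may vary by installation */

/* Request a file block, but fail rather than abort an active transaction
   that has updated the file. filno is assumed to be an open data file
   number; ctFBblock is assumed to be the basic block-request mode. */
COUNT block_file_without_abort(COUNT filno)
{
    COUNT rc = ctFILBLK(filno, ctFBblock | ctFBfailIfTranUpdated);

    if (rc == FBTU_ERR) {       /* error 1136 */
        /* The file was updated by a still-active transaction; the block
           request failed instead of aborting that transaction. */
        printf("File %d is updated by an active transaction; retry later\n", filno);
    } else if (rc != 0) {
        printf("ctFILBLK failed with error %d\n", rc);
    }
    return rc;
}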