FairCom DB maintains data and index caches in which it stores the most recently used data images and index nodes. The caches provide high-speed memory access to data record images and index nodes, reducing demand for slower disk I/O on data and index files. The server writes data and index updates first to cache and eventually to disk. Data and index reads are satisfied from the server’s cache if a cache page contains the requested data record image or index node. When necessary, the server writes pages to disk. The server also supports background flushing of cache buffers during system idle times.
The sizes of the data and index caches are determined by server configuration file settings. It is also possible to disable caching for specific data files and to dedicate a portion of the data cache to specific data files.
The FairCom DB data cache uses the following approach to cache data record images: if the data record fits entirely within one or two cache pages, then the entire record is stored in the cache.
Note: Even a record smaller than a single cache page may require two cache pages as the record position is generally not aligned on a cache page boundary.
If the data record image covers more than two cache pages, then the first and last segments of the record are stored in the cache, but the middle portion is read from or written to disk directly. This direct I/O is generally efficient because the transfers are aligned and cover a multiple of the cache page size.
This behavior can be adjusted. If the server keyword MPAGE_CACHE is set to a value N greater than zero, then records that fit within N+2 cache pages are stored entirely in the cache. The default value is zero, which produces the behavior described above.
Note: Setting MPAGE_CACHE greater than zero does NOT ensure faster system operation; it is more likely to be slower. Although more of each record is kept in cache, there is increased overhead in managing each individual cache page. The cache pages for consecutive segments of a record (where a segment fills a cache page) are completely independent of each other: they are not stored in consecutive memory, and I/O is performed separately for each cache page. Use this option only in special circumstances, with careful, realistic testing.
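If you do decide to experiment with this setting, an entry of the following form (the value 4 is only an illustration) causes records that fit within 4+2 = 6 cache pages to be stored entirely in the cache:

MPAGE_CACHE 4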
To understand FairCom DB’s caching of index nodes, it is necessary to review the relationship between the server’s PAGE_SIZE setting, the size of index cache pages, and the size of index nodes.
The server’s PAGE_SIZE setting determines the size of index cache pages and the size of index nodes for index files created by the server. Because the server must be able to fit an index node into a single index cache page, the server’s PAGE_SIZE setting determines the maximum size of index nodes supported by the server. The server can access index files that were created with an index node size less than or equal to the server’s PAGE_SIZE setting, but attempting to open an index whose index node size exceeds the server’s PAGE_SIZE fails with error KSIZ_ERR (40, index node size too large).
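For example, to create index files with larger index nodes, you might set PAGE_SIZE in ctsrvr.cfg (16384 is only an illustrative value; check the server documentation for the supported range):

PAGE_SIZE 16384

Remember that a server configured with a smaller PAGE_SIZE cannot open indexes created with this larger node size.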
Although caching data benefits server performance, it is important to be aware of the effect of caching data on the recoverability of updates. The state of a cached file after an abnormal server termination depends on FairCom DB options in effect for that file. Below is a summary of the effect of caching on each file type. The following sections discuss this topic in detail.
The memory allocated to the data and index caches is set in the FairCom DB configuration file, ctsrvr.cfg, using the DAT_MEMORY and IDX_MEMORY options. For example, to allocate 1 GB data cache and 1 GB index cache, specify these settings in ctsrvr.cfg:
DAT_MEMORY 1 GB
IDX_MEMORY 1 GB
The server allocates this memory at startup and partitions it into cache pages. The server’s PAGE_SIZE setting determines the size of the data and index cache pages.
The FairCom DB Server’s data and index cache configuration options accept an optional scaling factor (such as KB, MB, or GB) when interpreting cache memory sizes. You are always free to specify the exact numerical value instead. The keywords that accept scaling factors, and their limits, are listed in the server configuration documentation.
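For example, the following two entries are equivalent, since 1 GB is 1,073,741,824 bytes:

DAT_MEMORY 1073741824
DAT_MEMORY 1 GB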
Per File Cache Options
In some cases it may be advantageous to turn off data file caching, especially for files with very long variable length records. Configuration entries of the form:
NO_CACHE <data file name>
disable caching for the specified data files. Note that <data file name> may be a wildcard pattern, and you can specify multiple entries to cover more than one file.
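For example, the following entries (the file names are illustrative) disable caching for one specific file and for all files matching a pattern:

NO_CACHE hugeblob.dat
NO_CACHE archive*.dat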
Cache memory can be dedicated to specific data files using the following server configuration option:
SPECIAL_CACHE_FILE <data file name>#<bytes dedicated>
This option is useful for files that should always remain in memory: it prevents required files from being pushed out of cache when, for example, other large files are occasionally read. For even more control, you can also specify the percentage of the total cache to be reserved for specially cached files. This setting limits the overall data cache space that may be dedicated to individual files.
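For example, the following illustrative entry dedicates 500,000 bytes of data cache to customer.dat:

SPECIAL_CACHE_FILE customer.dat#500000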
The FairCom DB database engine optionally maintains a list of data files and the number of bytes of data cache to be primed at file open. When priming cache, FairCom DB reads the specified number of bytes for the given data file into data cache when physically opening the data file.
Data files are added to the priming list with configuration entries of the form:
PRIME_CACHE <data file name>#<bytes primed>
The <bytes primed> parameter accepts scaling factors (for example 2GB).
Example: The following entry instructs the server to read the first 100,000 bytes of data records from customer.dat into the data cache at file open:

PRIME_CACHE customer.dat#100000
A dedicated thread performs cache priming, permitting the file open call to return without waiting for the priming to be completed.
Use PRIME_CACHE with the SPECIAL_CACHE_FILE keyword to load dedicated cache pages at file open.
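For example, the following illustrative pair dedicates 1,000,000 bytes of cache to customer.dat and fills those dedicated pages as soon as the file is opened:

SPECIAL_CACHE_FILE customer.dat#1000000
PRIME_CACHE customer.dat#1000000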
To prime index files, use configuration entries of the form:
PRIME_INDEX <index file name>#<bytes primed>[#<member no>]
If the optional <member no> is omitted, all index members are primed. If an index member number is specified, only that member is primed.
For example, the following entry instructs the server to read the first 100,000 bytes of index nodes from customer.idx into the index buffer space at file open:

PRIME_INDEX customer.idx#100000
The nodes are read first for the host index, and then for each member until the entire index is primed or the specified number of bytes has been primed.
The following example restricts priming to the first member (the index after the host index):

PRIME_INDEX customer.idx#100000#1
The <data file name> or <index file name> can be a wildcard specification, using ‘?’ for a single character and ‘*’ for zero or more characters.
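For example, the following illustrative entry primes the first 100,000 bytes of every data file whose name begins with cust and ends with .dat:

PRIME_CACHE cust*.dat#100000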
FairCom DB also supports priming the data cache in forward AND reverse order by index.
For example, a priming entry can prime up to 100,000,000 bytes from mark.dat, reading by the first index in reverse key order.
Control of Cache Flushing
By default, FairCom DB creates two idle flush threads at startup: one flushes transaction file buffers, and the other flushes non-transaction file buffers. The threads wake periodically and check whether the server is idle; if so, a flush begins. If subsequent server activity makes the server busy again, the flush is terminated. Low-priority background threads, such as the delete node thread, do not affect the server’s idle state, but activity from FairCom DB clients, FairCom DB SQL clients, and checkpoints does.
The following configuration options modify the behavior of these idle flush threads:
IDLE_TRANFLUSH <idle check-interval for tran files>
IDLE_NONTRANFLUSH <idle check-interval for nontran files>
The idle check intervals are specified in seconds; setting an interval to zero or a negative value disables the corresponding thread. Both intervals default to 15 seconds.
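For example, the following settings (illustrative values) check for idle time every 30 seconds for transaction files and disable the non-transaction flush thread entirely:

IDLE_TRANFLUSH 30
IDLE_NONTRANFLUSH 0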