Data and Index Caching Options

FairCom DB maintains data and index caches in which it stores the most recently used data images and index nodes. The caches provide high-speed memory access to data record images and index nodes, reducing demand for slower disk I/O on data and index files. The server writes data and index updates first to cache and eventually to disk. Data and index reads are satisfied from the server’s cache if a cache page contains the requested data record image or index node. When necessary, the server writes pages to disk. The server also supports background flushing of cache buffers during system idle times.

The sizes of the data and index caches are determined by server configuration file settings. Caching can also be disabled for specific data files, and a portion of the data cache can be dedicated to particular files.

In This Section

Caching of Data Records

Caching of Index Nodes

Properties of Cached Files

Configuring Caching

Caching of Data Records

The FairCom DB data cache uses the following approach to cache data record images: if the data record fits entirely within one or two cache pages, then the entire record is stored in the cache.

Note: Even a record smaller than a single cache page may require two cache pages as the record position is generally not aligned on a cache page boundary.

If the data record image covers more than two cache pages, then the first and last segments of the record are stored in the cache, but the middle portion is read from or written to the disk. This direct I/O is generally efficient as the reads are aligned and are for a multiple of the cache page size.

This behavior can be modified with the MPAGE_CACHE server keyword. If MPAGE_CACHE is set to a value N greater than zero, then records that fit within N+2 cache pages are stored entirely within the cache. The default value is zero, which produces the behavior described above.

Note: Setting MPAGE_CACHE greater than zero does NOT ensure faster system operation. It is more likely to be slower than faster. It does cause more of a record to be in cache, but there is increased overhead managing each individual cache page. The cache pages for consecutive segments of a record (where a segment fills a cache page) are completely independent of each other. They are not stored in consecutive memory. And I/O is performed separately for each cache page. This configuration option should only be used for special circumstances with careful, realistic testing.
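If careful testing does justify the setting, the keyword is specified in ctsrvr.cfg like any other configuration option. The value below is purely illustrative; with it, records that fit within four cache pages would be cached entirely:

MPAGE_CACHE 2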

Caching of Index Nodes

To understand FairCom DB’s caching of index nodes, it is necessary to review the relationship between the server’s PAGE_SIZE setting, the size of index cache pages, and the size of index nodes.

The server’s PAGE_SIZE setting determines the size of index cache pages and the size of index nodes for index files created by the server. Because the server must be able to fit an index node into a single index cache page, the server’s PAGE_SIZE setting determines the maximum size of index nodes supported by the server. The server can access index files that were created with an index node size less than or equal to the server’s PAGE_SIZE setting, but attempting to open an index whose index node size exceeds the server’s PAGE_SIZE fails with error KSIZ_ERR (40, index node size too large).
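For example, to support index nodes up to 32 KB, an entry such as the following could be placed in ctsrvr.cfg (the value is illustrative and is given in bytes):

PAGE_SIZE 32768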

Properties of Cached Files

Although caching benefits server performance, it is important to be aware of its effect on the recoverability of updates. The state of a cached file after an abnormal server termination depends on the FairCom DB options in effect for that file. Below is a summary of the effect of caching on each file type; the following sections discuss this topic in detail.

  • TRNLOG files: Caching does not affect recoverability. The server’s transaction processing logic ensures that all committed transactions are recovered in the event of an abnormal server termination.
  • PREIMG or non-transaction files: Caching can lead to loss of unwritten cached data in the event of an abnormal server termination, because the server does not guarantee that a persistent version of the unwritten updated cache images exists on disk.
  • WRITETHRU (applies to PREIMG and non-transaction files): To minimize loss of cached data, the WRITETHRU attribute can be applied to a PREIMG or non-transaction file. WRITETHRU causes writes to be written through the server’s cache to the file system (for low-level updates) or flushed to disk (for ISAM updates).

Configuring Caching

The memory allocated to the data and index caches is set in the FairCom DB configuration file, ctsrvr.cfg, using the DAT_MEMORY and IDX_MEMORY options. For example, to allocate 1 GB data cache and 1 GB index cache, specify these settings in ctsrvr.cfg:

DAT_MEMORY 1 GB

IDX_MEMORY 1 GB

The server allocates this memory at startup and partitions it into cache pages. The server’s PAGE_SIZE setting determines the size of the data and index cache pages.

The FairCom DB Server’s data and index cache configuration options support specifying a scaling factor used when interpreting cache memory sizes. The supported scaling factors are:

  • KB: interpret the specified value as a number of kilobytes.
  • MB: interpret the specified value as a number of megabytes.
  • GB: interpret the specified value as a number of gigabytes.

You are always free to specify the exact numerical value as well. The following web page lists those keywords that accept the scaling factors and their limits:

Scaling Factors for Configuration Keyword Values
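For example, assuming the MB scaling factor is interpreted as 1,048,576 bytes, the following two entries request the same data cache size:

DAT_MEMORY 800 MB

DAT_MEMORY 838860800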

Per File Cache Options

In some cases it may be advantageous to turn off data file caching, especially for files with very long variable-length records. Configuration entries of the form:

NO_CACHE <data file name>

disable caching for the specified data files. The <data file name> may be a wildcard pattern, and multiple entries can be specified to cover more than one file.
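For example, the following entries (file names are illustrative) disable caching for one named file and for all files matching a wildcard pattern:

NO_CACHE history.dat

NO_CACHE archive*.dat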

Cache memory can be dedicated to specific data files using the following server configuration option:

SPECIAL_CACHE_FILE <data file name>#<bytes dedicated>

This option is useful for files that you want to remain in memory at all times. For example, it prevents required files from being pushed out of the cache when other large files are occasionally read.
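For example, the following entry (file name and size are illustrative) dedicates 10,000,000 bytes of cache to customer.dat:

SPECIAL_CACHE_FILE customer.dat#10000000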

For even more control, you can also specify the percentage of the total cache to be reserved for specially cached files.

SPECIAL_CACHE_PERCENT <percentage>

This option specifies how much of the overall data cache space may be dedicated to individual files.
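For example, the following entry (the value is illustrative) allows specially cached files to occupy up to one quarter of the data cache:

SPECIAL_CACHE_PERCENT 25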

Priming Cache

The FairCom DB database engine optionally maintains a list of data files and the number of bytes of data cache to be primed at file open. When priming cache, FairCom DB reads the specified number of bytes for the given data file into data cache when physically opening the data file.

Data files are added to the priming list with configuration entries of the form:

PRIME_CACHE <data file name>#<bytes primed>

The <bytes primed> parameter accepts scaling factors (for example 2GB).

Example: The following keyword instructs the Server to read the first 100,000 bytes of data records from customer.dat into the data cache at file open:

PRIME_CACHE customer.dat#100000

A dedicated thread performs cache priming, permitting the file open call to return without waiting for the priming to be completed.

Use PRIME_CACHE with the SPECIAL_CACHE_FILE keyword to load dedicated cache pages at file open.
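For example, the following pair of entries (values are illustrative) dedicates 100,000 bytes of cache to customer.dat and fills those dedicated pages when the file is opened:

SPECIAL_CACHE_FILE customer.dat#100000

PRIME_CACHE customer.dat#100000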

To prime index files, use configuration entries of the form:

PRIME_INDEX <index file name>#<bytes primed>[#<member no>]

If the optional <member no> is omitted, all index members are primed. If an index member number is specified, only that member is primed.

For example, the following keyword instructs the Server to read the first 100,000 bytes of index nodes in customer.idx into the index buffer space at file open:

PRIME_INDEX customer.idx#100000

The nodes are read first for the host index, and then for each member until the entire index is primed or the specified number of bytes has been primed.

The following example restricts the priming to the first member (the index after the host index):

PRIME_INDEX customer.idx#100000#1

The <data file name> or <index file name> can be a wildcard specification using a ‘?’ for a single character and a ‘*’ for zero or more characters.

FairCom DB also supports priming the data cache in forward and reverse key order by index.

PRIME_CACHE_BY_KEY <data_file_name>#<data_record_bytes_to_prime>#<index_number>

For example:

PRIME_CACHE_BY_KEY mark.dat#100000000#-1

This entry primes up to 100,000,000 bytes from mark.dat, reading by the first index in reverse key order.
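Assuming a positive index number selects forward key order, an entry such as the following (illustrative) would prime the same file by the first index in forward order:

PRIME_CACHE_BY_KEY mark.dat#100000000#1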

Control of Cache Flushing

By default, FairCom DB creates two idle flush threads at startup. One thread flushes transaction file buffers, and the other flushes non-transaction file buffers. The threads wake up periodically and check whether the server is idle; if so, a flush begins. If subsequent server activity causes the server to no longer be idle, the flushes are terminated. Low-priority background threads, such as the delete node thread, do not affect the server’s idle state, but activity from FairCom DB clients, FairCom DB SQL clients, and checkpoints does.

The following configuration options modify the characteristics of these idle flush threads:

IDLE_TRANFLUSH <idle check-interval for tran files>

IDLE_NONTRANFLUSH <idle check-interval for nontran files>

The idle check interval values are specified in seconds. Setting an interval to zero or a negative value disables the corresponding thread. The default for each interval is 15 seconds.
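For example, the following entries (values are illustrative) set a 30-second check interval for transaction files and disable the flush thread for non-transaction files:

IDLE_TRANFLUSH 30

IDLE_NONTRANFLUSH 0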
