FairCom ISAM for C

Scanner Cache

Note: The LRU algorithm is no longer used for the data cache.

The index cache pages are managed with a least-recently-used (LRU) scheme: when a page is needed to satisfy a current request, the page that has gone untouched the longest is selected for reassignment. For indexes, this simple strategy works very well, since traversing an index tree frequently requires access to the same important high-level nodes (such as the root node) and/or to the “current” leaf node.
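
The selection rule itself is simple. The following C sketch is a generic illustration of how an LRU victim page might be chosen; it is not FairCom's internal implementation, and the CachePage structure and lru_victim() function are hypothetical names used only for illustration.

#include <stddef.h>

typedef struct {
    long last_touched;   /* monotonically increasing stamp, updated on every access */
    /* ... page identification and contents ... */
} CachePage;

/* Return the index of the page that has gone untouched the longest. */
static size_t lru_victim(const CachePage *pages, size_t npages)
{
    size_t victim = 0;
    for (size_t i = 1; i < npages; i++)
        if (pages[i].last_touched < pages[victim].last_touched)
            victim = i;
    return victim;
}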

For data files, the LRU scheme is not always effective. In particular, when data file access or updates involve many different records with little or no revisiting of records once they have been processed, the LRU scheme can result in many cache pages being assigned to recently accessed records, yet there is little chance, at least from the user's perspective, that these pages will be revisited.

Consider simply traversing a large set of records using NXTREC()/NXTVREC(): the contents of these records could exhaust the cache, yet if no access beyond the initial read is required, those cache pages are wasted. Adding a large number of records, especially at the end of a file, presents the same problem.
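
For example, a full sequential pass over the records might look like the following sketch. The buffer size, the error handling, and the exact FRSREC()/NXTREC() calling conventions are simplified assumptions (keyno is a hypothetical index file number for the data file); consult the function reference for the precise prototypes. Under the default LRU scheme, every record read in this loop can claim a cache page that is unlikely to be revisited.

char recbuf[256];                          /* assumes records fit in 256 bytes */

if (FRSREC(keyno, recbuf) == NO_ERROR) {   /* position at the first record */
    do {
        /* ... process recbuf; these records are not expected to be revisited ... */
    } while (NXTREC(keyno, recbuf) == NO_ERROR);   /* step to the next record */
}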

The advanced FairCom DB header mode ctSCANCACHEhdr enables an alternative caching strategy. The scanner cache strategy uses all of the standard cache routines, except that when a cache page is required (that is, when FairCom DB has not found the data block it needs already in cache), it uses one of two dedicated cache pages.

Example

The following call enables scanner mode for the file specified by the file number datno, so that it uses the scan cache strategy:

PUTHDR(datno, YES, ctSCANCACHEhdr);

The scanner only goes to the LRU list if it has not yet dedicated both cache pages. Once dedicated, these pages may be referenced by other users, but they are not placed back on the LRU list until either the file is closed or scanner mode is toggled off.

Note: The strategy only applies to the calling connection's access to datno; each connection controls its own data cache strategies for the files it opens.
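
Putting the pieces together, a long scan might be bracketed as in the following minimal sketch. The assumption that passing NO to PUTHDR() toggles scanner mode off again is drawn from the description above and should be verified against the function reference; the traversal itself is the loop sketched earlier.

PUTHDR(datno, YES, ctSCANCACHEhdr);   /* dedicate at most two cache pages for this connection's scan */

/* ... FRSREC()/NXTREC() traversal of the data file, as sketched above ... */

PUTHDR(datno, NO, ctSCANCACHEhdr);    /* assumed toggle-off: returns the dedicated pages to the LRU list */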

The significance of this strategy is that a large set of data record operations can be performed with only two cache pages, with little or no decrease in performance compared to unrestricted cache usage. Moreover, this restricted use may help other aspects of system performance, since the bulk of the data cache remains available for other files and/or other users: a large traversal of the data file will not displace other cache pages.

Two dedicated cache pages were chosen because two pages are always sufficient to hold the beginning and end of a record image, which is typically all that is stored in the cache. A short record always fits within one or two cache pages, and a long record has its interior (i.e., the portion that does not fit within the two pages) read or written directly, bypassing the cache, regardless of scanner mode, unless MPAGE_CACHE has been enabled.

Particular Cases

SPECIAL_CACHE_PERCENT

The previous dedicated cache strategy uses the keyword SPECIAL_CACHE_PERCENT to specify the maximum percentage of the data cache that can be dedicated. This percentage defaults to 50%. If the number of dedicated cache pages is already at this limit, then the scan cache approach will not take effect until the total number of dedicated cache pages falls below the limit.
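
For reference, the keyword is set in the server configuration file. The following ctsrvr.cfg excerpt is illustrative only; the value 50 is simply the documented default.

; ctsrvr.cfg (illustrative): cap dedicated cache at half of the data cache
SPECIAL_CACHE_PERCENT 50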

SPECIAL_CACHE_FILE

The SPECIAL_CACHE_FILE list of files with dedicated cache does not affect the scan cache strategy: a user can call PUTHDR() for any data file whether or not it is on the SPECIAL_CACHE_FILE list. The SNAPSHOT statistics for special cache pages include dedicated scan cache pages.
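
A SPECIAL_CACHE_FILE entry is likewise a server configuration keyword. The file name and byte count below are examples only, and the <name>#<bytes> form shown is an assumption that should be verified against the server configuration reference.

; ctsrvr.cfg (illustrative): dedicate cache to a specific data file
SPECIAL_CACHE_FILE customer.dat#100000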
