Performance Optimization - Page Size Tuning

Index Page Size

FairCom Database Engine has a default page size for its b-tree indices. Each page holds one or more index entries.

Changing the index page size is a major maintenance task and should be done carefully, after a full, reliable backup. The procedure below represents best practice for this operation.


Operating systems have a default file block size in which they read and write data. On most recent Windows versions this is 4K; other operating systems vary. Setting the index page size to the OS file block size, or a multiple of it, can improve I/O performance.

FairCom Database Engine supports setting PAGE_SIZE from 512 bytes up to a maximum of 64K (65,536).
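As a quick sanity check, the constraints above (supported range of 512 bytes to 64K, power of 2, at least a multiple of the OS file block size) can be verified with a small shell function. The 4096-byte block size below is an assumption for a typical Windows NTFS or Linux ext4 volume; adjust it for your system.

```shell
#!/bin/sh
# Validate a candidate PAGE_SIZE against the documented constraints.
# OS_BLOCK=4096 is an assumption (typical NTFS/ext4 block size).
OS_BLOCK=4096

valid_page_size() {
    size=$1
    # Must be within the supported range: 512 bytes to 64K (65,536).
    [ "$size" -ge 512 ] && [ "$size" -le 65536 ] || return 1
    # Must be a power of 2.
    [ $(( size & (size - 1) )) -eq 0 ] || return 1
    # Should be at least the OS block size, and a multiple of it.
    [ "$size" -ge "$OS_BLOCK" ] && [ $(( size % OS_BLOCK )) -eq 0 ] || return 1
    return 0
}

valid_page_size 32768 && echo "32768: OK"
valid_page_size 2048 || echo "2048: smaller than the OS block size"
```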


Set your c-tree server PAGE_SIZE to at least the OS file block size, never smaller. For example, if you set c-tree's PAGE_SIZE to 2K on Windows, the OS still reads 4K and discards 2K on every read.

For most applications, 8K is a good rule of thumb. To fully optimize your application, and if your files are fairly large (see below), you may gain performance by increasing this setting in multiples of your OS file block size.

It is difficult to define "fairly large files" because of the many ways c-tree can access your files. The best way to tune this value is to try a few different sizes with your actual application and a copy of your data. For example, increase from 8K to 16K; if performance improves, try 32K. Once performance starts to drop, scale back in smaller increments to find the sweet spot for your application.
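The doubling search described above can be sketched as a simple loop. The run_benchmark function here is a hypothetical placeholder, not a FairCom utility; substitute a timed run of your own application against a copy of your data.

```shell
#!/bin/sh
# Sketch of the tuning loop: try doubling PAGE_SIZE candidates from 8K
# up to the 64K maximum. run_benchmark is a hypothetical placeholder;
# replace it with a timed run of your application against test data.
run_benchmark() {
    echo "benchmark at PAGE_SIZE $1 (placeholder)"
}

size=8192
while [ "$size" -le 65536 ]; do
    run_benchmark "$size"
    size=$(( size * 2 ))
done
```

Once the best doubling step is found, narrow the search with smaller increments around it, per the advice above.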

Rebuilding Files

Recommendation: If you are considering changing your PAGE_SIZE setting, be sure to rebuild ALL of your index files.

If you are reducing your PAGE_SIZE setting and you miss rebuilding an index whose node size is larger than the new PAGE_SIZE, you will get an error 40 (KSIZ_ERR, index node size too large) when that file is opened, and the index will need to be rebuilt to resolve the error.

Recommendation: Experiment on copies of your data and server folder. When you are confident with the results, back up your source data and then change it.


  1. Shut down the ctreesql server, start it again, then shut it down again to be sure all logs are flushed and a clean final checkpoint has been written.
  2. Make a copy of your server folder including the data folder.

    Move this copy to a different machine from your live server. Keep a clean copy in case you need to start over.

  3. Edit ctsrvr.cfg and add a PAGE_SIZE keyword, for example:
    PAGE_SIZE 32768

    The size must be a power of 2. The maximum page size is 64K (65,536).

  4. Go to the "data" directory under the directory the server runs from. Delete all log and start files, that is, all files whose names start with L00….FCS, S00….FCS, D00….FCS, and so on. After a clean shutdown there should be no I00….FCS files in the directory.
  5. Rebuild the three superfiles used by the server with the ctscmp utility, supplying the <sect> parameter. The sect size is the page size divided by 128. For example, for a page size of 32768, use a sect size of 256:

    ./ctscmp <server directory>/data/ctdbdict.fsd 256

    ./ctscmp <server directory>/data/ctreeSQL.dbs/SQL_SYS/ctreeSQL.fdd 256

    ./ctscmp <server directory>/data/FAIRCOM.FCS 256

  6. Delete the current index files for your data files.
  7. Rebuild the indices for ALL data files with the ctrbldif utility, supplying the -<sectors> parameter:

    ./ctrbldif <server directory>/data/ctreeSQL.dbs/MyData.dat -256

  8. Restart the server and verify no errors were logged in <server directory>/data/CTSTATUS.FCS.
  9. Connect to your data files and verify no errors were returned.
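The sect arithmetic used in steps 5 and 7 can be wrapped in a small helper. This is an illustrative sketch: sect_size is not a FairCom utility, just a one-line calculation matching the page size/128 rule above.

```shell
#!/bin/sh
# Compute the <sect> argument for ctscmp/ctrbldif: sect = PAGE_SIZE / 128.
sect_size() {
    echo $(( $1 / 128 ))
}

sect_size 32768   # 32K page size -> prints 256
sect_size 8192    # 8K page size  -> prints 64
```

The computed value is then passed as the final argument to ctscmp, as shown in step 5.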