FairCom Database Backup Guide

Back Up as Segmented Files

The FairCom Server and the ctdump and ctrdmp utilities support dynamic dumping of segmented files and the creation of segmented (stream) dump files.

Segmented dump files are distinct from the !EXT_SIZE feature, which automatically breaks the dump file into 1 GB ‘extents’. Dumping to segmented files allows you to take advantage of huge file support and to specify the size and location of each dump file segment.
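For contrast, here is a minimal sketch of the extent-based alternative. It assumes !EXT_SIZE is given as a dump script entry with a size argument; the value shown and its unit are illustrative only, so consult the reference for your release before relying on it:

!DUMP d:\bigdump.hst
!EXT_SIZE 512
!FILES
bigdata.dat
!END

With this approach the extents are created automatically, so you cannot control their individual sizes or locations; the !SEGMENT approach described below gives you that control.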

  • To dump segmented data or index files, simply list the host (main) file in the !FILES list; the segments are managed automatically.
  • To segment the output dump file produced by the dynamic dump itself, use these script entries:

!SEGMENT <size of host dump file in MB>
<dump seg name> <size of segment in MB>
...
<dump seg name> <size of segment in MB>
!ENDSEGMENT

The host dump file is the file specified in the usual !DUMP entry. Only the last segment in the !SEGMENT / !ENDSEGMENT list can specify a size of zero, which means its size is unlimited.

For example, assume bigdata.dat is a segmented file with segment names bigdata.sg1, bigdata.sg2, and bigdata.sg3, and the index file bigdata.idx is not segmented. To dump these files into a segmented dump file, use the script:

!DUMP d:\bigdump.hst
!SEGMENT 50
e:\bigdump.sg1 75
f:\bigdump.sg2 0
!ENDSEGMENT
!FILES
bigdata.dat
bigdata.idx
!END

The host dump file is up to 50 MB on volume D:, the first dump file segment is up to 75 MB on volume E:, and the last dump file segment is as large as necessary on volume F:.
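Once the script is saved to a file (bigdump.scr is a hypothetical name used here), the dynamic dump can be started with the ctdump utility. The following invocation is a sketch that assumes the conventional ctdump argument order of user ID, password, and script file name, with an optional server name; check the usage text of your release, as the exact arguments can differ:

ctdump ADMIN ADMIN bigdump.scr FAIRCOMS

To restore from the resulting segmented dump, the ctrdmp utility is driven by a similar recovery script; its arguments are likewise release-dependent.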
