OLTP Solutions

This document provides guidance on selecting optimal hardware for a high-speed transaction processing environment using FairCom DB, FairCom's high-performance database technology. Performance metrics from real-world implementations are also presented to demonstrate the transaction processing power available from FairCom database technology.

In This Chapter

Hardware Component Sizing

FairCom DB Customer Performance Metrics

Hardware Component Sizing

Database performance is dictated by a number of factors, including:

  • Architecture of the application
  • Composition of the data
  • Hardware underlying the system

Because multiple factors are involved, there is no single answer to the question of which hardware will yield the fastest application performance. However, the usual best practices for optimizing data-intensive applications largely apply to tuning a FairCom DB hardware environment. The primary computer hardware subsystems to be cognizant of when selecting hardware for a high-speed FairCom DB installation are:

  • Disk I/O subsystem
  • Network infrastructure
  • System memory
  • Number of CPUs
  • CPU clock speed

Because FairCom DB is a very efficient database engine, its baseline hardware requirements are minimal, far lower than those of most enterprise-level database technologies. Yet as more resources become available, FairCom DB can be configured to use them, making it very scalable.

The best mechanism for properly sizing a system is to perform a real-world test simulating peak numbers of concurrent users, record sizes, and data flow. Then, using operating system tools and the tools provided with FairCom DB (e.g., Snapshot and the server administration utilities), one can observe where performance bottlenecks occur and determine whether adjustments to the hardware configuration or the FairCom DB configuration are warranted. FairCom has extensive experience in this regard, and our technicians may be able to engage with the application development team to help identify and overcome bottlenecks.

When such a real-world simulation is not possible, note that most transaction-heavy applications tend to be I/O bound, usually within the disk subsystem. FairCom customers therefore tend to focus their hardware budgets on optimizing the disk subsystem first. Items such as large memory caches on the I/O controller, fiber-optic backplanes, and high-RPM drives tend to be wise investments. Additionally, storing the FairCom DB transaction logs on a different hard drive spindle than the data and index files can further improve data throughput, as sketched below.
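For illustration, a FairCom DB server configuration file (ctsrvr.cfg) can direct alternating transaction logs and the data files to separate drives. This is a minimal sketch only: the paths are hypothetical, and the LOG_EVEN, LOG_ODD, and LOCAL_DIRECTORY keywords should be verified against the configuration reference for your FairCom DB release.

    ; ctsrvr.cfg - illustrative sketch: keep transaction logs on
    ; spindles separate from the data and index files
    LOG_EVEN         /mnt/log_a   ; even-numbered transaction logs on drive A
    LOG_ODD          /mnt/log_b   ; odd-numbered transaction logs on drive B
    LOCAL_DIRECTORY  /mnt/data    ; data and index files on their own drive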

The network infrastructure is also performance sensitive; it should be sized to support peak data throughput requirements without introducing latency.

FairCom DB memory requirements can be reasonably approximated with the following equation:

Baseline memory =
      server executable size + 1 MB
    + communications library size (if applicable)
    + data cache (DAT_MEMORY)
    + index buffer (IDX_MEMORY)
    + (900 bytes * number of files (FILES))
    + (325 bytes * maximum number of connections (CONNECTIONS))
    + (4 bytes * CONNECTIONS * CONNECTIONS)
    + (16,000 bytes * number of actual connections)
    + (256 bytes per connection per file actually opened)
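As a rough illustration of the arithmetic, consider a hypothetical configuration (all figures below are assumptions chosen for the example, not recommendations): a 10 MB server executable, a 1 MB communications library, a 512 MB data cache, a 256 MB index buffer, 1,000 files, 128 configured connections, and 50 actual connections each with 20 files open.

    Server EXE size + 1 MB:      10 MB + 1 MB    =  11.00 MB
    Communications library:                      =   1.00 MB
    Data cache (DAT_MEMORY):                     = 512.00 MB
    Index buffer (IDX_MEMORY):                   = 256.00 MB
    Files:           900 bytes * 1,000           =   0.90 MB
    Connections:     325 bytes * 128             =   0.04 MB
    Connections^2:   4 bytes * 128 * 128         =   0.07 MB
    Actual conns:    16,000 bytes * 50           =   0.80 MB
    Open files:      256 bytes * 50 * 20         =   0.26 MB
                                                 ------------
    Baseline memory                              ~ 782    MB

In practice the data and index caches dominate the total; the remaining terms mainly guard against underestimating the fixed per-file and per-connection overhead.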

FairCom DB can utilize very large data and index caches (tens to hundreds of gigabytes each). In addition, the cache subsystem is very granular, so the system can be precisely tuned to optimize data throughput. For example, one can elect to cache an entire file, cache a percentage of a file, or exclude specific files from the cache, as sketched below.
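A minimal configuration sketch of that granularity follows. DAT_MEMORY and IDX_MEMORY are the cache keywords referenced in the sizing equation above; the sizes shown and the SPECIAL_CACHE_FILE line are illustrative assumptions, so confirm the exact syntax against the configuration reference for your release.

    ; ctsrvr.cfg - illustrative cache tuning
    DAT_MEMORY  8GB                    ; overall data record cache
    IDX_MEMORY  4GB                    ; overall index node cache
    ; dedicate cache to a hot file so it remains memory-resident
    SPECIAL_CACHE_FILE  orders.dat#1GB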

FairCom DB automatically scales across multi-CPU systems. Although its CPU requirements are a fraction of those of other database engines, multiple CPUs can provide performance benefits. Furthermore, systems can be architected so that FairCom DB runs on the same machine as the core components of the application, minimizing network traffic.

FairCom DB Customer Performance Metrics

FairCom DB technology is a popular choice in any marketplace that deals with large volumes of data and requires real-time data access. Two such markets are finance and telecommunications. In these markets, FairCom technology has been implemented in applications such as:

  • High speed data repositories for stock markets the world over
  • Back-end processing of market analysis data by investment companies
  • Enterprise level data switches for credit card companies
  • High-speed phone number lookup and phone usage information for telecom companies
  • Fraud detection analysis

Typical hardware configurations in these markets are mid-range enterprise machines from manufacturers such as Sun Microsystems, IBM, Hewlett-Packard, and Stratus. Machines typically have from 4 to as many as 48 CPUs, many gigabytes of RAM, and state-of-the-art disk array subsystems from manufacturers such as EMC and HP. The disk subsystems are typically NAS or SAN devices connected to the network via Fibre Channel, utilizing the latest disk array striping technology to provide the appropriate blend of redundancy and performance.

FairCom metrics that we see in these markets across an entire system (which may include multiple FairCom DB installations) are:

  • Number of files: 400-30,000 files per application
  • File sizes: 100+ GB per single data file
  • Total data storage: 4.3 TB-10 TB
  • Data volume per day: 20 GB-1,000 GB
  • Transactions per day: 25 million-300 million
  • Transactions per second: 400-100,000

For example, one high-end financial application was tested on a Stratus 5700 machine running Red Hat Linux with four dual-core 2.8 GHz Xeon processors and an internal ATA disk drive, with a single FairCom DB operating at a sustained rate of 20,000 transactions per second.
