Last updated: 14 Sep 1998
Well, it probably depends on what you call big, on the design of your database, and on what hardware you have. People have reported databases of 3 or 4 million records, although at that size running summary reports and finds may take a while! Nobody has ever reported hitting a maximum-record limit in Approach. In theory it should be able to handle as many records as a dBase IV database can hold, which is 4,294,967,296 (2^32, the number of records a 32-bit record counter can address).
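As a back-of-the-envelope check of that figure, here is a small Python sketch. The 2^32 ceiling comes from the 32-bit record counter mentioned above; the 100-byte average record length is purely an assumption, included only to give a feel for why nobody reaches the limit in practice.

    # Sanity check of the theoretical dBase IV record ceiling quoted above.
    # The 100-byte record length is a made-up average, not a dBase figure.
    MAX_RECORDS = 2 ** 32                      # 4,294,967,296 records
    HYPOTHETICAL_RECORD_BYTES = 100            # assumed average record size
    file_size_gb = MAX_RECORDS * HYPOTHETICAL_RECORD_BYTES / 1024 ** 3
    print(f"{MAX_RECORDS:,} records")          # 4,294,967,296 records
    print(f"~{file_size_gb:,.0f} GB of data")  # ~400 GB at 100 bytes/record

At any plausible record size you would run out of 1998-era disk space long before you ran out of record numbers.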
To maintain the best balance of speed and stability you may want to consider deleting the .adx files (indexes) on a regular basis (say every week or two), as sketched below. There are some possible complications with this, so read "Deleting and recreating index files" carefully first.
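A minimal Python sketch of that housekeeping step follows. The folder path is an assumption (point it at wherever your Approach data files actually live), and it only touches files with the .adx extension mentioned above. Close Approach and take a backup before running anything like this, and heed the warnings in "Deleting and recreating index files".

    # Sketch: remove Approach index (.adx) files so they are rebuilt
    # the next time the database is opened.  DATA_DIR is hypothetical.
    import pathlib

    DATA_DIR = pathlib.Path(r"C:\ApproachData")    # adjust to your data folder

    for adx in DATA_DIR.glob("*.adx"):             # index files only, nothing else
        print(f"deleting {adx}")
        adx.unlink()                               # Approach recreates indexes as needed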
It has been suggested that the number of joins has a far greater impact on the speed and stability of large databases than the sheer number of records. One subscriber found a database with "80,000 records and approximately 20 tables joined" to be unusably slow, and suggested that "100,000 records if there are more than 4 or 5 joins" is about the maximum usable size.

Somebody else replied that they had "databases exceeding an average of 400,000 records up to 2.2 million records and joins exceeding 10 databases" with Approach 96 having no problems handling the data: "We have been doing multi field queries in an acceptable time frame and once the smart indexes are created, finds on fields are just as fast on a (database) with 2.2 million records as a find on a field with 400,000 records. Our standard hardware is a P5 166 with 64 meg of ram. On a P6-200 with 64MB RAM, doing a complex search on 4 fields takes 15 sec."