Our flagship application can query many records from the database. We limit query results to 10k records, but the user can request unlimited extra batches of 10k. When the user wants to view the details of one of those records, we load in a little information for all the records.
Part of each record is a set of subrecords. The memory layout embedded a fixed-size array of subrecords within the main record. Initially this worked because the maximum number of subrecords was capped. However, that maximum was recently opened up, and the result was that some queries failed with an out-of-memory error. Ouch.
A lot of people pitched ideas for solving this problem, but those people were not familiar with the code. We have a lot of C-style code that accesses the array of records, as well as the subarrays of subrecords. Plus the database code is Pro*C, which is in effect the C programming language. You can't do things such as just stick an STL vector in there.
I gave this problem some thought after I was assigned the task of solving it, and figured the simplest solution was best: change the record structure to hold a variable-length list of subrecords, sized by the actual number of subrecords. That makes effective use of the memory we allocate. The implementation was a little tricky.
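A minimal sketch of that change, using the same made-up field names as above (the real code differs): the record carries a count and a pointer, and the subrecord array is allocated per record at exactly the size the query returned. Allocation stays on malloc/calloc so the existing C-style and Pro*C code can keep treating these as plain C structs:

```cpp
#include <cstdlib>

// Made-up subrecord fields for illustration.
struct SubRecord { int id; char code[8]; };

// Variable-length layout: a count plus a pointer, instead of a
// fixed SubRecord subs[MAX] array baked into every record.
struct Record {
    int record_id;
    size_t sub_count;
    SubRecord *subs;  // exactly sub_count entries, allocated per record
};

// Allocate a record and its subrecord array. calloc of at least one
// entry guards the (surprising) dependency on an empty, unused
// subrecord being preallocated.
Record *record_alloc(int record_id, size_t sub_count)
{
    Record *r = static_cast<Record *>(std::malloc(sizeof *r));
    if (r == NULL)
        return NULL;
    r->record_id = record_id;
    r->sub_count = sub_count;
    r->subs = static_cast<SubRecord *>(
        std::calloc(sub_count ? sub_count : 1, sizeof *r->subs));
    if (r->subs == NULL) {
        std::free(r);
        return NULL;
    }
    return r;
}

// free pairs with malloc/calloc; the array goes first, then the record.
void record_free(Record *r)
{
    if (r != NULL) {
        std::free(r->subs);
        std::free(r);
    }
}
```

The per-record allocation is the tricky part: every code path that used to index a fixed in-struct array now has to go through the pointer, and every path that frees a record has to free the subrecord array too.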
The app is a mix of C and C++ code. Some of the memory is allocated with a call to the C malloc function, while other memory uses the C++ operator new. These techniques are similar, but each allocation must be released with its matching counterpart: free for malloc, delete for new. In the end, the only thing that caught me off guard was the application's subtle dependency on an empty, unused subrecord being preallocated. Luckily a sharp coder can handle all such nuances.
Newbie Gets Confused
-
A relatively junior developer got tasked with running some performance tests
on a lot of new code. The first task was to get a lot of data ready for the new
c...