Our project has an independent testing team, and sometimes they need help reproducing the problems we fix. I recently sent them a performance enhancement for a very slow app. When they tried to duplicate the problem, they simply did not have enough data, so I gave them a script that generated plenty of it.
The tester got back to me and said the script was not working. He had tried to run it a couple of times, but it kept aborting after 10,000 rows were added. I got his database login and ran the script myself: same problem. That was strange, because it worked every time in my development database.
The weird thing about the problem was that the error was not occurring directly in my script. Instead, it was failing inside an audit database trigger, and always after the 10,000th row was inserted. I looked at the source code of the trigger. It obtained a unique number from a database sequence, then used that number as the key for the records it inserted into the audit table.
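To make the pattern concrete, here is a minimal sketch of that kind of audit trigger, using Python's sqlite3 as a stand-in for the real database. The table, column, and trigger names (`orders`, `audit_log`, `audit_seq`, `orders_audit`) are all hypothetical, and the sequence is simulated with a one-row table since SQLite has no native sequences.

```python
import sqlite3

# A stand-in for the setup described above: an audit trigger that
# pulls its key from a "sequence" and inserts into an audit table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Simulated sequence: one row holding the next value to hand out.
    CREATE TABLE audit_seq (next_val INTEGER NOT NULL);
    INSERT INTO audit_seq VALUES (1);

    CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL);
    CREATE TABLE audit_log (
        audit_id INTEGER PRIMARY KEY,   -- key comes from the sequence
        order_id INTEGER,
        action   TEXT
    );

    -- The audit trigger: grab the next sequence value, use it as the
    -- audit table's key, then advance the sequence.
    CREATE TRIGGER orders_audit AFTER INSERT ON orders
    BEGIN
        INSERT INTO audit_log (audit_id, order_id, action)
        VALUES ((SELECT next_val FROM audit_seq), NEW.id, 'INSERT');
        UPDATE audit_seq SET next_val = next_val + 1;
    END;
""")

conn.execute("INSERT INTO orders (id, amount) VALUES (1, 9.99)")
row = conn.execute("SELECT audit_id, order_id FROM audit_log").fetchone()
print(row)  # (1, 1)
```

The key point is that the trigger blindly trusts the sequence to produce a value that is not already a key in the audit table.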
It took a while to come up with a test that exposed the source of the problem. I checked all the existing keys in the audit table, and wouldn't you know it? Some of them were higher than the numbers coming back from the database sequence, which was supposed to be providing new unique numbers. Once I cycled the sequence past the duplicate keys, the script ran fine.
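The diagnostic and the fix can be sketched the same way, again with sqlite3 standing in for the real database and all names hypothetical. The check compares the highest existing key in the audit table with the value the sequence will hand out next; the fix pushes the sequence past the stale keys.

```python
import sqlite3

# Reproduce the bad state: the sequence lags behind keys that are
# already present in the audit table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE audit_seq (next_val INTEGER NOT NULL);
    INSERT INTO audit_seq VALUES (5);
    CREATE TABLE audit_log (audit_id INTEGER PRIMARY KEY);
    INSERT INTO audit_log VALUES (5), (6), (7);  -- stale high keys
""")

# The test that exposed the problem: is the highest existing key at or
# above the sequence's next value? If so, the trigger's next insert
# will collide with an existing key.
max_key = conn.execute("SELECT MAX(audit_id) FROM audit_log").fetchone()[0]
next_val = conn.execute("SELECT next_val FROM audit_seq").fetchone()[0]
print(max_key >= next_val)  # True -> a collision is waiting to happen

# The fix: cycle the sequence past the duplicate keys so it resumes
# handing out values no existing row uses.
if max_key >= next_val:
    conn.execute("UPDATE audit_seq SET next_val = ?", (max_key + 1,))
```

In a database with real sequences the fix would be done with the sequence's own commands rather than an UPDATE, but the logic is the same: advance the sequence to one past the highest key already in the table.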
As a follow-up, I had a talk with our DBA Team Lead to make sure this could not happen in Production. He told me I should have sent him the problem in the first place; he knows I have plenty of important things to do besides debugging database issues. I will take him up on that offer soon.
A Little Bit of Crypto - I have been trying to figure out how "collision resistant" some of these standard hash functions are. It is a tough concept to get my head around. I figure...