Backup

When I joined my company, I was issued a laptop to read company e-mail and enter my time card. The laptop was an IBM ThinkPad. I got a carrying bag with it too. I was given a user name and password to log into the company network. For the rest I was on my own. It didn’t take long for me to realize that the files on my laptop were periodically backed up to some network storage. I did not do anything to set this up. It came from the company configured to do this automatically. The backup is pretty unobtrusive. I like that.

To convey how transparent my laptop backup is, I will confess that I had no idea how the backup was happening. I did not know what software was used to do the backup (I have since checked and found that it is DataConnector by Connected Corporation). Nor did I know where my files were getting backed up to. I assume that they are getting copied somewhere on the company network where they are safe.

I also assume that, like most modern backup products, it is a smart backup. That is to say, the software knows which files have changed since the last backup, and it only copies the newly changed files each time it does an incremental backup. I can only surmise this: I have put a lot of files on my laptop, yet the backups do not seem to take very long.
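I have no visibility into how DataConnector actually decides what to copy, but the general technique is easy to sketch. The snippet below is a minimal, hypothetical version of incremental change detection, not the real product’s logic: snapshot each file’s size and modification time, then on the next run copy only the files whose signature differs.

```python
import os

def snapshot(root):
    """Record (size, mtime) for every file under root."""
    state = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            state[path] = (st.st_size, st.st_mtime)
    return state

def changed_files(old_state, new_state):
    """Files that are new, or whose size/mtime changed since last snapshot.
    These are the only files an incremental backup needs to copy."""
    return [path for path, sig in new_state.items()
            if old_state.get(path) != sig]
```

A real product would also track deletions and probably hash file contents, but even this crude signature explains why backing up a laptop full of mostly unchanged files is fast.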

The whole laptop backup story is an example where my company is doing well to take care of me as an employee. Somebody has taken care of the important detail of backup so I can concentrate on solving problems for my company’s clients. If only everything were this simple and easy, corporate life would be grand.

Schedule Screw Up

My company works on a contract where the amount of money made depends on meeting the schedule. This schedule is a huge Microsoft Project plan. Management on the project has been telling the staff that it is everyone’s responsibility to make sure they understand the schedule, identify which tasks on the schedule they are responsible for, and ensure we meet the deadlines on the schedule. On the surface that seemed fair enough. So I studied the schedule and found which items I was solely responsible for.

The schedule is not the only thing that dictates what work gets done. Change requests get individually costed, approved, and worked. I found one huge change request for which I had significant schedule tasks to complete. However this set of tasks was dependent on the customer making a whole lot of decisions. So I told the top manager that the schedule could not be met based on delays in the customer organization. I was then told that, for this particular set of tasks, the schedule was not accurate. I was assured that the schedule dates for these tasks would be modified later.

As the customer made decisions, a new schedule was proposed for the tasks assigned to me. My manager asked if the proposed dates could be met. I told him I could make them if I could work exclusively on this task. He put me on it full time. My manager said he would change the master schedule with these new dates. For a second I thought things were coming together. Next thing you know I go to a meeting held by the big project manager. I was informed that management did not like to see tasks behind schedule like the one I was working on. That came as a huge surprise since I was hitting all the dates agreed upon between me and my manager.

I guess one problem here was that nobody was giving me a copy of the master schedule. So I got a copy from another manager. Wouldn’t you know it? The master schedule had all the original incorrect dates for my tasks. No wonder it appeared like I was late on all my tasks. I wasted no time and called up a bunch of managers on their cell phones and told them what was up. They said it would be taken care of by my manager. Somehow my manager got busy. So I had to gather up all the evidence of the screw up, sit down with my manager, and force him to fix each of the dates in the master schedule. Hey I can do project management. But I don’t like to. And it is definitely not my job. By pushing schedule responsibility to each individual on the project, it has become my job. Yes this is probably nothing new in the software development world. I do not have to like it though.

Presentation Day

Today the application development team presented our documentation changes to the customer community. Murphy’s law was in full effect this morning. The development manager tried to print out a bunch of handout copies on the color printer. The thing was not working. He somehow got a black and white copy printed. A developer ran to the copy room to make black and white copies. Then a bunch of us went up to the presentation room. Another customer group was in there and said they had a standing reservation for the room. We took a hike.

So we found a replacement room. I got a lot of flak this morning because I wore my best suit and tie. I also made sure I went over my part of the presentation many times. Not knowing your material is a sure way to ensure you get real nervous during a presentation. I was impressed when our team lead started off the presentation. He had a very confident way of speaking. He appeared to actually talk with the audience. And you could tell that he knew his stuff. There were a couple concerns from the customer regarding the security for the new stuff. And some diagrams in our documentation were not clear. But the first part went pretty smoothly.

Then it came to my turn. I was glad that I had practiced what I was going to say. Like my team lead, I just used the pages of the presentation as reminders about what I wanted to say to the audience. Luckily I have spent many years on this project. So I was able to speak at length on the existing functionality, along with the changes we were proposing. I answered a couple questions on some details from the audience. When I neared the end of my part, I asked another developer to take over. He said that I was doing such a good job that I should just do his part. Luckily I had practiced his part of the presentation. I glossed over the rough parts of this guy’s documentation. There was no problem faking familiarity with that piece.

After I completed my section, I handed the mic over to our new Visual Basic developer. Initially I had thought that this guy’s piece was the easiest. But the customer community seemed to grill the poor guy. Our manager tried to jump in and save him a couple times. He just could not stay on course for his presentation. I do not think this is due to his personality. He does have a strong accent. And he does not know much about our system. Somehow the guy made it to the end of the presentation. Everybody breathed a sigh of relief. We all got a congratulatory e-mail from upper management for a job well done. I think our next customer presentation will go even better.

Parent Child

Previously our project had a main application with a lot of code. It allowed the user to do all kinds of queries, among other things. The application was getting unwieldy. So my team lead thought it would be good to distribute some new functionality in separate applications. However any new applications needed to integrate with the main app. Therefore I took on a task to split out some of the existing functionality from the application to a new stand alone application. The goal was to leave the user experience the same, while prototyping ways to interface with the main application. I decided to choose one of the many queries to split out.

Some of the challenges with integrating the stand alone and main applications included dealing with cached data in the main application. However I put together an API for any new apps to gain access to the main app cache. There was also required behavior to reproduce: the query window needs to go away when the main application is closed. I wanted to make sure this always happened. Otherwise the users might get suspicious if an orphan window stuck around when the main app went away. In addition, the new stand alone app needed the main app to work.

Therefore I implemented a two-phase approach to ensuring orphan children did not stick around. Before the main application shuts down, it will send window messages to the child stand alone apps, telling them to exit as well. This seemed appropriate. However I also wanted to handle the special case where the main application aborts or is killed. In that scenario the main app does not get a chance to inform the child apps to quit. Therefore I also implemented polling functionality in the child apps. They ping the main app every 2 seconds via Windows messages. If they do not hear from the main app, they assume it has died and they end their processes.
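The real implementation rides on Windows messages between the parent and child processes. Ignoring the message plumbing, the child-side logic boils down to a watchdog that remembers when the parent last answered. This is a sketch with invented names, not our actual code:

```python
import time

class ChildWatchdog:
    """Child-side liveness check: if the parent has been silent longer
    than the timeout, assume it died and shut the child down too."""

    def __init__(self, timeout_seconds=2.0, clock=time.monotonic):
        self.timeout = timeout_seconds
        self.clock = clock           # injectable clock, handy for testing
        self.last_reply = clock()    # assume the parent is alive at startup

    def record_parent_reply(self):
        """Call each time the parent answers a ping."""
        self.last_reply = self.clock()

    def parent_alive(self):
        """False once the parent has missed the timeout window."""
        return (self.clock() - self.last_reply) <= self.timeout
```

The child’s 2-second timer would call `parent_alive()` after each ping and exit its process when it returns False. Using a monotonic clock matters here; a wall clock that jumps (say, a daylight saving change) could make a healthy parent look dead.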

By now I had figured that I had a solid design. The parent or child applications may abort due to exceptional circumstances. However there did not seem to be any way for there to be lingering orphan children applications sticking around. This is why I found it strange that a maintenance developer thought that a particular error encountered by the user was due to the fact that the main app had died causing problems for the stand alone app. While I agreed this might be possible, it was highly unlikely. I informed this maintenance developer that he had better go back to the chalk board and try to figure out what was really going on. Solid design is, well, solid.

Grid Prototype

Currently I am working on the design for the new features we are adding to our application suite. Requirements had been collected for a certain feature the users want. However I could not decipher the requirements. Neither could the requirements team. They pretty much wrote down what the users said without understanding the details. So I scheduled a meeting with the users to pick their brains on exactly what they were looking for. At the meeting I started to ask probing questions. I was starting to get a little feel for what we needed. Another developer piped up that he knew exactly what the customer wanted, and had even started a preliminary design. He told me to come see him for the details.

This caught me by surprise. The other developer works for another company. He does some of the back end work for our project. I know him to be a thorough type of guy. So I trusted his understanding of the problem. I scheduled a time to get with this developer to find out what he knew. When the time came to finally meet him, he had a one page document with a lot of details. He had sketched up a mock user interface. He also had designed a database table to store the data. I asked him a few questions about the data types. Then I was ready to go produce the real design document for the client side applications.

One of the things that our users like to see in design documents is user interface mock ups. I wanted to oblige them for this specific change. The mock up from the developer I met showed a spreadsheet. So I planned to put a spreadsheet in my user interface prototype. My first instinct was to show a list control in report mode. It ended up looking like a spreadsheet. But then I wondered whether it would be easy for a developer to allow and track the user input from the cells. The requirements also called for some cells to be removed. This seemed a little much for the standard Microsoft list control.

It was getting late and I needed to stick to the schedule. So I decided to go with the simplest design, one that even the least experienced developer could code up. Therefore I created a dialog prototype which had a bunch of edit controls lined up in a table format. They look fine when aligned correctly. Edit controls gave me ultimate flexibility in layout and customization. And I figure any developer should be able to work with an edit control.

Database Lockdown

We have some developers who work on the UNIX platform. The box is from Sun and runs Solaris. There is an Oracle database installed on the server. Developers use database accounts that are externally identified. That way they can authenticate once at the operating system level. They can then run their programs which call SQL*Plus and not need to re-enter the password. This setup stopped working this past week. The developers pushed the problem to the DBA Team.

After a bit of investigation, the DBA Team found that the security people disabled the ability to connect via an externally identified account. The lead DBA was not happy. He let the software development manager deal with the security folks. The problem is that our DBAs are not the true DBAs for the databases we use. They have to go through the security people that actually manage the databases.

This is a very unusual setup. We are talking about development databases that have fabricated data. Yes there are some sensitive programs running on these boxes. But it is not that code which is locked down. It is the actual database (that has the dummy test data). Our DBAs are just our interface to the real DBAs from the security team. They have to make calls and send e-mail requests to the people with the real power.

The outcome of this backwards arrangement is that the whole team is at the mercy of the security folks. When they make and enforce new policies, the dev team suffers. At this point it is up to management to fight the security Nazis. Some developers are in work stoppage status right now. I feel their pain. It seems these problems happen once a year. Is somebody over there in security getting bored and having fun at our expense?

Google Employment

In the old days, a programming job at Microsoft was the dream job for developers. Employees were rumored to work hard. But they were compensated with stock options that would make you a millionaire. It also helped that Microsoft made the desktop software that everybody knew about. I confess that I myself went to a local interview to work for Microsoft consulting.

Microsoft may have lost some of its shine during the dot com era. I think for a while Google had overtaken Microsoft as the premier employer for developers. Microsoft stock may have gone up over the long years. But Google stock rocketed up into the hundreds of dollars in a couple years. In addition, the unofficial motto at Google was “Don’t be evil”. So not only would you be potentially making millions from your Google stock options, you would not have to sell your soul to the Evil Empire.

This past quarter Google reported some softness in their earnings report. Yes they continue to grow. But maybe they are not growing at the old rocket-charged Google rate. A hint that things may be slowing down at Google is their new hire rate. This past quarter Google hired 448 new employees. This brings their total head count to 19,604. And compared to previous quarters, this accounts for a lot less new hires.

I imagine you have to be very good to get a job at Google these days. They have always been selective. But now, perhaps due to economic bad times, they are even more selective. So the smart thing to do may be to find out who the next Microsoft or Google would be. It would be easier to get in on the ground floor of the next great software development powerhouse. The million dollar question is what is the name of that company. MySpace? Facebook? Some little company that is just getting started?

Tech Jobs

This weekend I read a relevant article from Information Week magazine entitled “Tech Jobs Show Surprising Strength”. The employment numbers were in from the last quarter. And the article reviewed the important stats affecting the IT community. Overall IT jobs were up 2% from the previous quarter, which should be good. However there was worry that job growth might slow in the future. There appears to be huge recent pressure to cut back on IT salary spending.

IT employment was hovering at a nice 2.2%. There was a lot of good news for managers. IT management jobs were up a whopping 16% from last year. Unfortunately programmer jobs fell 4% from the previous quarter. This last statistic was the one that I paid the most attention to. Now I did not get to see the raw statistics. I only viewed the article which summarized the high points from an IT industry perspective. But I usually fear any bad news for the software development profession.

You often hear a lot of complaining about the outsourcing of software development jobs to other countries. Along with this there is grumbling about the import of programming talent under the H1-B visa program. I do not know whether any of these jobs contributed to the statistics being summarized in the article. Somehow I do not think so. Therefore, if the article has bad news for programmers, the effect may be compounded by a loss of jobs to foreign interests.

The one thing I am pretty confident about is that the programming job function will not go away. I seriously doubt there will be an automated bot that can do my job any time soon. And I do not think anybody can do the work I do. Yes you can train a non-programmer to slap some controls on a form. But when things go wrong with more complex systems, companies will always need an expert to step in and save the day. Maybe this is the way developers need to position themselves. Find out which functions cannot be replaced. Then gain expertise in those areas to keep your job safe.

Screen Resolution

The applications in our system require a screen resolution of 1280 x 1024. They also require a certain font size for everything to work. We document these settings in our computer operator handbook. Almost all developers and testers run the application on a screen with the wrong resolution. The result is that the application operates in a degraded mode.

Recently we upgraded the development tool set, and modified our code to work with the new tools. The test team was tasked with validating that we did not break anything. As usual they were conducting their tests with the wrong screen setup. So we started getting all kinds of false positive trouble tickets. Static text on the screen was getting whole words clipped. Or sometimes portions of words were getting clipped. The test team kept concluding that we were misspelling words. This is laughable. Many times the problem tickets would come to me. The first question I always asked was about their screen resolution. And the normal response was, “Oh yeah”.

I have been pondering what the best solution to this set of problems is. Our customers almost always follow the handbook and set the correct screen resolution. So they do not have any of these problems. Perhaps one solution is to set a policy where all staff (especially the test team) needs to set their screen resolution correctly. A more dramatic solution would be for the application to refuse to run if the resolution was not set correctly. However this seems a bit extreme. The application can run, albeit in a degraded mode, when the screen resolution is not set correctly.
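Our application is a Windows program, so a real check would read the screen size through something like the Win32 `GetSystemMetrics` call. The policy itself, though, is simple enough to sketch. The function below is my own invention, not anything in our code base; it takes the middle road of letting the app run while producing a status string that answers the trouble-ticket question up front.

```python
# Handbook requirement; our operator handbook specifies 1280 x 1024.
REQUIRED = (1280, 1024)

def check_resolution(width, height, required=REQUIRED):
    """Compare the actual screen size against the handbook requirement.
    Rather than refusing to run, return a status string that a ticket
    screener could quote back to the tester."""
    if (width, height) == required:
        return "ok"
    return "degraded: running at %dx%d, handbook requires %dx%d" % (
        width, height, required[0], required[1])
```

Logging that string at startup would let us close the false positive tickets with one glance instead of a round of phone calls.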

The best approach might be to modify the application to work correctly in any screen resolution. This might seem like a great idea. However this will take a lot of time and testing. We operate on a maintenance contract at my work. Therefore this task will effectively cost a lot of money. Our schedule is tight. And the problem does not seem that important in the big scheme of things. So I will probably not have the bandwidth to tackle a huge problem like this. But I can try to make small improvements whenever I can. That’s what it’s all about in the end.

Customer Contact

Our requirements team received a request to cost some changes in the customer’s organization. Since the members of the requirements team are new, they asked the development team for some help determining how these changes were going to affect our system. The only person who responded was a database administrator. He took a couple important keywords from the request, and searched the database for column names which matched the keywords. This did not seem to help the requirements team any.

So the requirements team once again reached out to development for assistance. This time they sent a request for help to me. I looked at the search the DBA conducted and determined that it was of no use. Therefore I asked the requirements team to provide me everything the customer had sent them. I took a few minutes to review the documents, and determined that these changes affected a lot of systems in the customer organization but had no impact on ours.

Now I know I have been working on this project for a while. So I have deep domain knowledge. But there are many members of our client organization that have worked on the project much longer. Any one of them could have easily determined these changes have no impact on our system. Why didn’t the requirements team reach out to them? Maybe they did not want to appear dumbfounded to the customer. But I thought having a good relationship with the customer was the job of the requirements team. Yes I can do requirements work. But then why do we need a requirements team in the first place?

Most of the information I have seen from the requirements team adds little value to determining what our customer actually requires. The team takes documents authored by the client and formats them to make them requirements documents. There is little to no analysis. When I ask them questions about what they have reformatted, I frequently get blank stares. Something is definitely not right here. I feel like I am working with people who are churning out a lot of words. But the words have no meaning to them. They are skilled at the art of generating output. It’s just that they have no clue what any of it means. Are we doomed or what?

Security Slouch

There are 3 big potential changes to be made to the applications for next year: must-have changes to make the programs work, desired modifications that the customer would like, and changes to meet security policies. The work surrounding the must-have changes is not negotiable. We must make these changes for the program to work. The desired modifications are extra and are scheduled separately. It looks like we are going to do these mods for next year. The third category is security changes to meet the security policies of our customer. The client has given the authorization to charge extra to make these changes. However our team is fully engaged. So our management informed the customer that we would try to get these changes in some time. But that time might be a long time in the future.

This is so typical of software development. The security policies have been in place for a long time at our customer’s organization. The prior contractor that did the maintenance for this system implemented some but not all security changes. I guess they dragged their feet. Now it seems that my company is doing the same thing. It is not that our company is doing this intentionally. They are just trying to ensure that we do not sign up for something that we cannot deliver. I am sure we could exchange some of the new features for some of the security based changes. But the end users do not care as much for the security changes. They want the new stuff they have requested.

This mentality is no good. I suppose this is how many organizations go down the slippery path and find out that their system was hacked. There is a lot of documentation back and forth based on the decisions made for what we will do this year. So I think our company has set itself up in a good light legally. But everything will get crazy if any of the security changes that were supposed to be made get skipped, and the system gets compromised. Our client deals with some highly sensitive data. I bet it would be a major event if the system got hacked or anything.

When are we going to learn to give the important items the attention they are due?

Doing Design

I have been tasked with designing the new features we shall be implementing for the next release of our software application suite. There is a lot of flexibility in this task. The main principle I have been following is putting myself in the programmer’s shoes. I ask myself what information I would personally have liked to receive at the design stage. That is what I strive to determine and document.

So far I write in English the process logic I expect to be followed. This gets a little technical. It drills down much deeper than a business or even a system requirement. I also go further and write up some pseudo code. This is where I get very specific and call out table and column names from the database. These steps are no-brainers. I do them for every change I design.
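To illustrate the style, here is a made-up fragment. The table and column names (ORDER_HDR, ORDER_STATUS, CLOSE_DATE, CLOSED_DATE) are invented for this example; the real design documents call out the actual schema.

```python
# Process logic (English): for each open order whose close date has
# passed, mark the order closed and stamp the date it was closed.
#
# Pseudo code, drilling down to the invented schema names:
#   UPDATE ORDER_HDR
#      SET ORDER_STATUS = 'CLOSED', CLOSED_DATE = SYSDATE
#    WHERE ORDER_STATUS = 'OPEN' AND CLOSE_DATE < SYSDATE

def close_stale_orders(order_hdr_rows, today):
    """Same logic expressed over in-memory rows (dicts keyed by column)."""
    for row in order_hdr_rows:
        if row["ORDER_STATUS"] == "OPEN" and row["CLOSE_DATE"] < today:
            row["ORDER_STATUS"] = "CLOSED"
            row["CLOSED_DATE"] = today
    return order_hdr_rows
```

The point is the level of detail: a developer picking this up knows exactly which table to touch and which columns change, without me dictating how they write the final code.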

I also do user interface mock ups and add them to the design document. This will help us communicate the design to the customer. It will also help keep development and the customer on the same page. This part does take a while to do. I need to actually code up a prototype user interface to generate the screen shots. But I think it is worth it.

Finally I go to traditional design techniques like generating class and sequence diagrams. These might be useful to a junior developer that does not know how to knock out classes. And a sequence diagram might be good for those situations where we need to integrate the new code with a lot of existing code. I can lay out all the existing call stacks and show where the new stuff is supposed to fit in. However I sometimes feel that this is overkill if I am drilling down the new stuff in such a detailed way.

Perhaps correct design is supposed to do all the work so that coding is a mundane task. This certainly will reduce risks at the coding phase. But it shall also decrease the fun during the coding phase. It might be necessary based on the very short duration for the coding phase of some changes this year. But in one change I purposefully left out a lot of the new code details. This was done in good faith. I figure that the portion that accesses the database should be coded up according to the skills of the developer. If they know Oracle Pro*C well, this is what they should use. However if they are good in OLE DB, I do not want to tie them down to Pro*C at the time of design.

FTP Lives

I read a lot of blogs related to software development. Previously I used to find the posts through Reddit. Now I find myself locating blog posts through Y Combinator News. Today I read a rant by Steven Frank arguing against the use of FTP. The introductory paragraph cited the technology as being 23 years old, and therefore obsolete. Apparently the guy got ripped by the feedback he has received from this viewpoint. Maybe that is a good thing. Controversy sometimes gets you readership. Initially I also thought this guy was off base by discounting the technology due to its age. Steven has since tried to explain himself and clarify the true reason why he advocates dropping FTP.

There are some legitimate concerns about FTP. It is not secure. We use FTP all over the place in our production system at work. However we are replacing the transfer technology with one that is truly secure. It is a variant of Secure FTP. It will take some time to implement the changes. Changing our own code to initiate the secure method is easier than getting the feeders of our system to change their code. Luckily we have some security mandates that we have to meet that require the disuse of regular FTP.

Now FTP will still have some uses. If you do not have security concerns, then FTP might be the right tool for the job. It does not matter how old the technology is. It does not matter when the technology was last updated. The technique is valid if it does the job and meets the requirements. We developers always like to jump on the latest technology. It is in our blood. And it seems more prevalent among the younger generation of programmers.

As I was meditating on the complaints about FTP, I had to chuckle a bit. I wonder if the opposition has heard of Trivial FTP (TFTP). Maybe they would go on a rampage against it if there was continued widespread use of it as well. TFTP is 28 years old. And it totally lacks authentication. Yeah. We better not tell them about TFTP. It could cause a ruckus.

Install Assumptions

This year we upgraded the tools used for application development. As a result, the build and install scripts needed to be modified. One would think that this would have been the easy part of the upgrade. However we are finding some problems at the last minute during customer testing. The first problem was due to a global decision made on the location of the Oracle 10g client. The team lead assumed that all computers would have C: and D: local disk drives. To save space, the decision was made to put the Oracle 10g client on the D: drive. The first time the client tried to run our Oracle client install script, it bombed since D: is mapped to the CD-ROM on the client workstations.

It is often the wrong assumption that gets you in the software development world. The effect of the assumptions could have been mitigated by following through and confirming the assumptions with the client. Even better would have been to obtain some client machines up front and figure these things out ourselves. That was the initial plan. But somewhere along the way the plan got scrapped. Now here we stand scrambling to get the install scripts modified. Luckily I am not the install script developer. I wonder if the decision makers knew the risk when they decided to allow us to not get computers.

Our install developer has been on the phone all morning with a system administrator from the client trying to debug the install issues. This is most unfortunate. It is understandable if there are small issues once in a while. But when a lot of things break down during the install, it does not look good for us. Perhaps the best way to proceed is to gather some lessons learned and make sure we do not encounter this problem again.

Interview Questions

I was at lunch the other day with a table full of software developers. There were a couple test team members present as well. I do not know how we got on the subject. But I started to ask some of the developers interview questions. I chose famous questions I had heard were administered at Microsoft interviews. So I started with one that I thought everyone would have heard about. Why are manhole covers round?

I posed the question to two of the developers that I like best. One of them took a while to think about it. I emphasized that the decision on whether or not Microsoft would hire them would be based on their answer. Developer number 1 answered that it is easier to flip the round cover on its side and roll it if you wanted to move it. Since I knew manhole covers were heavy, I figured this was an acceptable solution. Developer number 2 was clueless. Normally this guy has a lot to say. But he just did not know. No hire.

On the Internet the most common answer to this Microsoft question, and the one that would presumably get you the job, is, “Round covers are one of the few shapes that help prevent the cover from accidentally falling into the manhole itself”. When you think about it, this is true. But I have seen wise guys write that you could also choose a special triangle shape that has the same properties. In fact, they chose this special triangle to cover some manholes in Britain.
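That special triangle is a curve of constant width, the Reuleaux triangle being the classic example: however you turn it, the distance between its two parallel support lines stays the same, so like a circle it cannot slip through its own hole. You do not have to take that on faith; a quick numeric check, sampling the boundary of a Reuleaux triangle built on a unit equilateral triangle, shows the width is the same in every direction.

```python
import math

def reuleaux_points(n=600):
    """Boundary of a Reuleaux triangle on a unit equilateral triangle:
    three circular arcs of radius 1, each centered at the opposite vertex."""
    A = (0.0, 0.0)
    B = (1.0, 0.0)
    C = (0.5, math.sqrt(3) / 2)
    # Each arc spans 60 degrees, starting at the listed angle.
    arcs = [(A, 0.0), (B, 2 * math.pi / 3), (C, 4 * math.pi / 3)]
    pts = []
    for (cx, cy), start in arcs:
        for i in range(n):
            a = start + (math.pi / 3) * i / (n - 1)
            pts.append((cx + math.cos(a), cy + math.sin(a)))
    return pts

def width(points, theta):
    """Distance between the two parallel support lines whose normal
    points in direction theta."""
    ux, uy = math.cos(theta), math.sin(theta)
    proj = [x * ux + y * uy for x, y in points]
    return max(proj) - min(proj)
```

Sweeping theta over a full circle, the computed width stays at 1.0 (the side length) to within the sampling error, which is exactly the property that keeps the cover out of the hole.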

You could get really cocky and say, like some others have written on the Internet, that the covers must be round because the manhole is round. That may be true. But it will also most likely end your Microsoft interview unfavorably. I like some other unique answers to this question I have seen on the Internet. For example, one dude said that the round shape has been proven to have the best ability to withstand warping.

Disappointed by the answers (or lack thereof) to the first question, I proceeded to another question for which I have not heard any answers. “How would you move Mount Fuji from one side of Japan to the other?” This question stumped Developer number 1. However this time Developer number 2 thought he was smart enough to answer. He said he would take a picture of Mount Fuji. Then he would move the picture to the other side of the island. I told him he would not get the job at Microsoft with such a foolish answer. My own answer involved a complicated deconstruction and reconstruction of a lot of smaller pieces of the mountain. Most people at the table agreed that I too would not get the Microsoft job with this answer. Do you have a better one?

Design Tasks

Our project at work has received a major set of new requirements to implement for next year. First the team worked together on migrating the project to use a new set of development tools. We are winding down that effort, researching and resolving the problems found by internal test. The lead of our team decided it would be best for the team to get together to collectively design the solution for the new requirements. The software development manager had put together a schedule for the new requirements. Design was supposed to start Monday. But the application development team lead had requested we delay a day until the whole team could get together to work on the design.

Tuesday morning I got a call from the software development manager. I told him that we were delaying work on the design until the team could get together. He said that was definitely not the plan. He wanted me alone to work on the design while the rest of the team continued to fix migration bugs. I recommended he get in touch with our team lead and get on the same page. Since I sit near our team lead, I told him what the software development manager said. I also reminded him that the manager trumped the team lead. So I told him I was going to get busy on design unless he could make the manager think otherwise. We got a group email stating that I would be the only dude who would be assigned to the design.

I think I am a rarity among developers in that I actually like investigating and resolving bugs. This is one of the reasons I have been on this current project for so long. It is a big software maintenance project. There are many opportunities to work on bugs reported by developers, internal test, customer test, and production users. It also seems like there is always some weird and interesting problem to investigate. Most developers do not like the maintenance side of the work. I think some of these guys are getting jealous that I get to work almost exclusively on design for quite a while.

Now I am not sure why I got the bonus of being assigned exclusively to design. I know the manager is aware of my abilities and has no doubt I can get the job done to meet the schedule. I also think he has set me aside to work on “special projects”. Mostly these tasks are not glamorous. They involve review of new requests, along with analyzing and costing impact to the system. I also get pulled in to participate in customer conference calls. That is just not any fun for developers. Perhaps this design task is my reward for doing well in these special projects. I guess I should not question it. The manager is the boss after all right?

Meeting Time

Yesterday we had our normal weekly development meeting. These meetings can get very mundane. We go over the status of development each week. The development manager polls the team leads. The floor is opened for discussion on any number of topics. On good weeks the meeting lasts an hour. But some weeks it can go on for hours.

This past meeting was a bit unusual. One of the developers did not show up. I know she was in the office that day. So we went on without her. In the middle of the meeting she showed up very late. Hey. I guess things happen and you can’t make it on time. There is no problem with that. But this person started getting antsy when the meeting ran past an hour. She stated that we should not discuss issues since she needed to get back to work on development. On the surface this sounded like a good idea. However I thought the request was a bit undeserved, since she was late to the meeting in the first place.

I stayed in the meeting room for a follow-up meeting to discuss some issues related to our team. We were able to knock out a plan and get out of the conference room in a short amount of time. I was a bit suspicious when I returned to my cubicle though. The developer who was in such a rush to exit the initial meeting was busy on the phone talking with what seemed to be her friend. And the call lasted a long time. Maybe now I see what the rush to exit the meeting was – the need to do some personal business. That is not too good at all.

There might be some extenuating circumstances. This developer has been asked to help out another team with a tool she is not very familiar with. So she is not having any fun. But it takes a lot of nerve to pull some stunts like this. I thought back to a bigger meeting where this person called attention to the fact that she was taking on such a difficult task and making good progress. When you stand out like this at meetings, you had better make sure you are backing your words up with action. I am sensing a lack of action here in the trenches.

I find it interesting that I do not feel this way towards other developers who keep a low profile in meetings. They sit back and do not cause much of a commotion. So when I find they are chatting up their friends or leaving early I do not think any worse of them. I guess maybe I need to dig deeper into what is irking me. It must have something to do with behavior in meetings. Maybe I should have taken more psychology classes in college.

Installation

We support a suite of applications on our project. Each application essentially has its own installation program. The programs get updated at least once a year for the customer. Last year there were a lot of changes required. Unfortunately it took a long time to nail down the requirements and get the changes approved. We did not have enough time to include all the changes in our initial delivery to the customer. So we put together a staged set of follow-up deliveries to include the missing functionality.

There were a number of challenges to getting the full set of requirements implemented. We had extra configuration management work because we were developing and maintaining the code at the same time. And we also encountered all the normal challenges which lengthen the development cycle. As a result, we had to take some shortcuts for some parts of the new functionality. One example is that we had to use Microsoft Excel for some of the data input. We put together some Excel templates with macros which worked with the application and the back end database.

When it came to release time, we were once again getting behind schedule. So we had to take some more shortcuts with the Excel templates we were shipping out. I think we just created an executable that was a self-extracting set of spreadsheets that went to a well-defined location on the file system. You would think that this is no big deal. But now you needed to install the application, then run this Excel spreadsheet extractor program to get the full implementation of our software.

Shortly after this last release, another company won the contract to maintain the software. This added to some confusion over the new application. I overheard some testers for the client stating that the application no longer had the spreadsheets on their workstation. And then the new company’s internal test team found the same problem. Apparently the developers from the new team did not know to include the Excel spreadsheets with the application install. I got picked up by the new contractor. All it took was for me to figure out what was missing and how the install was supposed to work. Then the new install developer was able to do the install right. The spreadsheets now get installed when the main application gets installed. There is no more need to run a separate install for the Excel spreadsheets.

Training Budget

I have had all kinds of experiences with companies having a training budget for employees. A couple years ago I worked for a very generous company. They were small. One of the benefits was that you had a guaranteed annual training budget. You could spend $2500 on any class you wanted. The problem was that you only got 3 days off from work for any given class. Most classes I liked were at least 4, and sometimes 5 days long. I still managed to take a couple training classes while working at that company. They were slow to pay the training invoices. So I got hounded by the training company billing department. But it was a price I was willing to pay.

Then there were other cheaper companies that I worked for. They did not prohibit training. You just did not receive any by default. It was as if you needed to fight for your training. Managers acted as if they got a bonus if they did not approve your training. And there was no paid time off for training. If you did not want to take vacation, you somehow needed to convince a client to pick up the tab for the hours spent at training. This seemed a bit unrealistic. Lucky for me, I had a lot of friends in the client organization. I always got my hours approved and paid for during training.

My current company is a mix of good and bad. I do get a yearly budget for training. And I also get some paid time to be able to attend the classes. The trouble is that I need to get a million approvals to be able to attend training. It starts with my team lead. Then I go to my functional manager. Next I hit up my administrative manager to start the approval process. And my admin manager’s boss has to give the final approval. From the lack of training taken by my coworkers, I get the feeling that this rigid approval process is working to deter employees from taking training.

There is another twist in my employer’s training setup. The rules for getting the training paid seem confusing and complicated. They are written so that I have to pay for the training first. Then I need to apply for a reimbursement. In effect I have to float the company a loan to get the training. I think there is a way around this. But there is some trick to it. Time to get busy and keep fighting until I get my training approved and paid for. Nothing worthwhile is easy in this life. Paid training is no exception.

MDAC Woes

Our legacy suite of applications uses a number of technologies to access an Oracle database on the back end. Some applications use ActiveX Data Objects (ADO). Other applications use OLE DB. We install Microsoft Data Access Components (MDAC) when we install our applications to ensure that ADO and OLE DB are available. Previously we had MDAC version 2.6. Recently we upgraded our development tools. So we also planned to upgrade to MDAC version 2.8. The install scripts were upgraded to reflect this. When we went to internal test, a lot of pain broke out.

Half of the testers installed our application and had no problems. The other half installed the application, found a few features working, but ran into a lot of database access errors. I should have mentioned that some of our applications use Oracle Pro*C some of the time to access the database. The Pro*C relies on the Oracle client. It turns out this code worked regardless of the state of the MDAC install. But the ADO and OLE DB calls were bombing on half the test workstations.

A number of developers were out of the office. So my team lead asked me to do some tests on the failed test workstations. I went out and downloaded the MDAC version 2.8 installation executable from Microsoft. Apparently nobody on the development team put this where I could access it. That alone should be a sign that something is amiss in development. I tried to manually install MDAC version 2.8 on the tester machines. But each time I did, the install stopped and stated that Windows already has this functionality as part of the operating system. Foiled by Microsoft?

At this point I determined that it was time for the install boys from the development team to actually come in to the office and find out what was wrong. They heard about the tests I ran. And I heard them postulate that it might have something to do with the wrong version of the Oracle client being installed. That sounds wrong. But I am going to defer this to them. Maybe the fix is to revert to Pro*C code for all the applications. That is a joke. At least the application is working on my machine.

FGL Modifier

This year we upgraded to new versions of our development tool set. As a result, we needed to modify our Installshield packaging projects. While doing this we decided to make some improvements. Previously we had hard coded paths in the Installshield file groups. These paths got stored in FGL files. This would cause some problems when you were developing under a different folder than the hard coded path. The old way to deal with this was to standardize on the folder for linked files, especially when you were packaging the application for distribution via Installshield.

Our change for this year was to be able to run the Installshield build for source files located in any folder. We gave this task to a Visual Basic developer. He in turn created the FGL Modifier. The FGL files are just ASCII text. The paths for the linked files are contained in the FGL files. So the VB developer replaced the paths in the FGL files with sentinel values. His VB program then scanned these files, and replaced the sentinel values with a path from the Windows registry.

This was a noble design. The problem was that you needed to know how to use the FGL Modifier program. I tried running it but it gave me an error. Then I got help from the VB developer. He told me I needed the special FGL files that had the sentinel values in them. I got these but still encountered problems. So I was given the secret information to pass to the program on the command line. Wouldn’t you know it? The darn thing still had an error and bombed. I have not given up yet.

In the end I expect the FGL Modifier program to work correctly. But packaging the same code from different directories should be a common problem. I wonder why Installshield did not provide this functionality built into the software. Maybe we just do not know how to use Installshield correctly. Then again, we use a very old version of Installshield (5.5). I imagine the latest and greatest version is v10 or 11 or 12. Perhaps configurable paths for file links are supported by now. Does anybody out there know? Or do people use MSI now for packaging? Inquiring minds want to know.

View Types

We have automated scripts which build our application. This is a good thing. The build process gets the latest version of code from our source code repository. This post explores the decisions made on how we identify and get the latest version of source code to build.

Let me first describe our current implementation. We label all files that are the latest version. Then we create a config spec that gets all files with this label. Finally we use a dynamic Clearcase view with this config spec. The code from this view is copied to the local file system where the code is compiled.
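A config spec for this scheme is only a couple of lines. Something like the following (the label name is a made-up example):

```
# select every element carrying this build's label
element * BUILD_LABEL_123
# fall back to the latest version on the main branch for anything unlabeled
element * /main/LATEST
```

The dynamic view then presents exactly the labeled versions, and the build script copies that tree to the local file system.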

Recently we had to modify our build scripts to use the new tools we have upgraded to. A developer then questioned why we used the current technique to label and access the latest code. Another idea would be to label only the new or changed elements. Then you could get the latest code by getting the code with the latest label, or the last label, and so on. The advantage of this is that you consume fewer Clearcase resources by eliminating a new label on every file for every build.

Another variation on the build is to use a snapshot view. This view can be anchored on the local file system. Then you only need to refresh your view to get the latest code on your local drive in one operation. According to our configuration management experts, this activity will take a lot longer than using a dynamic view and copying the contents of the view to the local disk drive.

In discussions like this, I like to focus on the requirements to help determine the best solution. Our business goals for our builds are to obtain the latest version of the code, and to be able to easily retrieve the full version of source code used to perform any past build. Creating a label on all files for each build makes this last requirement easy. There are many paths to be able to get the latest code. I don’t think any way is better than another to get the latest version of the code.

After a discussion between development and CM, we decided to leave things the way they were. The developers who worked on porting the build scripts to the new tools had some trouble getting the views set up and working with the old technique. We might be making some strategic changes in our build scripts. I will keep you posted.

Secure CRT

This year our team decided to migrate from Visual Studio 6 to Visual Studio 2005. At first we planned to make use of the new Secure CRT functions. This may have been a premature decision. Some developers decided to pass on this change. In short, most of the string functions have new secure counterparts. An example is the new strcpy_s function, which is similar to the old strcpy. The new version takes an extra parameter that represents the size of the destination buffer.

In theory I guess the Secure CRT functions are a good idea. You hear a lot of stories about malicious users taking advantage of buffer overflows to attack a system. However we have a number of developers who are not strong in C/C++ development. It was difficult for them to grasp how one should pass a length to the Secure CRT functions. Many errors were introduced when they passed in the size of the pointer instead of the length of the memory that the pointer referenced.

I have not dug much into the implementation of the Secure CRT functions except when there were errors. You would think that there is overhead associated with checking each character in a string operation. There is no easy way to tell whether a string being copied is too long for the destination unless you actually count the characters in the input string. At times I have wondered whether you can mix the secure and non-secure string functions together. My theory is that this is not a problem, since a string is a string is a string.

It surprised me that I had not heard much about these Secure CRT functions before attempting to port to Visual Studio 2005. Or maybe other developers had mentioned it. But until you actually have to deal with it, “Secure CRT” really does not mean anything to you. The good news is that the old versions of the string functions have only been deprecated. They can still be used. It will only cause some compiler warnings to be generated. And you can choose to turn those off if you like.

Stack Overflow

My team maintains an application suite. Some of the applications are almost 15 years old. A lot of the code in the older applications has not been modified in a long time. The older code accesses the database using Oracle Pro*C. This sits well with me because I am a C programmer at heart. Recently we ported the application suite from using the Oracle 8 client to the Oracle 10g client. We encountered a number of problems with the Pro*C code. Right now we have resolved most of the problems.

While debugging some infrequently used code, I discovered all types of disturbing Pro*C code. I should have known something was amiss when I found some Pro*C functions that were very long. This was a clue that somebody was not following good design. Then I found a lot of dynamic SQL being executed. By itself this is not a problem. However the developer created separate string variables to hold each of the SQL statements. The result was a huge declaration section in the function.

It was the last issue that brought the most alarm to me. Many of the variables used to hold the dynamic SQL strings were huge. We had code that looked like this:

char sql_string1[10000];
char sql_string2[10000];
char sql_string3[10000];
char sql_string4[10000];


This seemed crazy. There were huge arrays for strings all being put on the stack. I always assumed this would not work since the stack is limited to a fixed size (1 MB by default for Windows executables). In fact there were a couple such functions which just did not work. So I converted them to use strings with memory allocated on the heap. Maybe these were amateur C programmers that initially coded the application. The amazing thing is that this has worked for such a long time.

Perhaps the true solution is to move all the Pro*C to a more modern database access technique. However the C programmer inside of me advises against such a move.

Obfuscation is Bad

Recently we ported our software to a more recent version of Visual Studio and the Oracle Client. The last application we delivered to test had some problems. Initially the work for this application was assigned to another developer. But when there were problems it somehow got reassigned to me. So I stepped up and took ownership of the problem. I was easily able to replicate the problem. However the code worked fine in debug mode. So I reverted back to the most basic of debugging techniques I know. I added print statements all over the code to locate where the application was bombing.

At first I found that all of the database calls from one of the modules were failing. Upon closer inspection of the project properties, I found that the release version of the module was still connected to the old version of the Oracle client. So I updated the project to use Oracle 10g. This got me a little further. But it did not completely resolve the problem. A bunch of print statements later, I discovered the problem was located within a C function called “trim”.

An educated guess made me believe the “trim” function did something like removing trailing white space. A quick scan of its source code made me feel a bit ill. There were absolutely no comments. That’s normally OK as I can read code as well as the next guy. But it seemed as though the function was obfuscated, either by design or by poor programming skills. There were lots of variables defined and used in the function that had really cryptic names. Some examples of the variables are tmp, tmp2, str, wk, and ln. There was also a bevy of if statements nested within each other that added to the confusion. The only thing that I knew for sure was that this function certainly was not just stripping the blanks off the end of a string.

Once again I relied on the good old print style of debugging. I found that the application was trying to write to some memory that was supposed to be protected. Unfortunately the compiler did not catch this. And somehow it worked fine in the debug version. So I fixed the problem. Then I did my duty and added some comments in front of the function describing what the heck the thing did. Now I did not go as far as renaming the variables. But I added a bunch of comments each time the variables were used to give a maintenance developer a chance of understanding what was going on. Now we just need to make sure we don’t write any more cryptic functions like this one.

Going Virtual

My company started up the current software maintenance contract in a strange mode. At first we bid the contract for developers to work out of our company’s headquarters, doing all our work remotely for the client. At the last minute the details were changed per the request of the customer. Instead we were to report to the client’s site to work. There was one big problem with this plan. They did not have cubicles for us to work out of. So our whole team ended up working out of a big lab at the client’s site.

Life at the lab is suboptimal. There are many challenges with performing work in this environment. One of the problems is that the network connectivity is not too good in the lab. Many of our developer servers are located in another building. The network speed between the lab and that building is slow. It takes forever to access code in our source code repository. Most of the time to do a software release is spent getting the latest version of code from the Rational Clearcase server.

The team brainstormed a solution to the slow network speed. Our best choice was to do our development work on virtual desktops located in the same building where our development servers are located. Now renting these virtual desktops comes with a cost. But I think we pass these costs on to our client. It also takes a while and a lot of negotiating to get the system administrators of the virtual desktops to make configuration changes that we need. I am lucky in that I have admin rights on the virtual desktop that serves as our build machine.

We got a new Configuration Management team member. He had a task from his boss to find out which virtual desktop the build machine runs on, and to gain the ability to remotely connect to that machine. This CM guy came and asked me for information on the build machine. I told him the domain name of the virtual desktop. He came back telling me that he could not log on to the machine. I explained that there was a team that administered the virtual desktop machines. This team needed to grant the access he required. It is probably going to be some time before our CM guy gets access to this machine. There are benefits and drawbacks to going virtual like this. Right now we are experiencing some of the pain.

Review of C

By profession I am a C++ programmer. However I first learned the C programming language. Officially I only worked for one year on a job that exclusively required C. Ever since then the gigs were C++ programming jobs. However C++ is really an extension of C. And you can use a C++ compiler to work with C code. In fact, you can write C++ code but really be writing mostly C style functions.

The hard core programmers on my team are also C++ programmers. Every once in a while we get a Java or Visual Basic programmer who is moonlighting as a C++ programmer to pay the bills. Depending on whether we like these programmers, we C++ guys will either empathize with them or tease them. One current member of our team is a VB guy. And he gets the short end of the stick. The boys usually treat him like a substandard programmer. Because everyone knows that Basic is not a real computer science programming language.

To get back at me, the Visual Basic developer gave me a copy of C Primer Plus. His intent was to inform me that, although I may be a C++ developer, I need to brush up on my C programming skills. Today I had to reinstall my Oracle Reports Builder software. This took a long time. So I decided to skim the C Primer Plus book. The table of contents mostly outlined topics I knew well. However I found the section on variable types quite interesting. So I decided to read and study up on it.

Here is a list of the different variable types in C. Do you know the difference between them? What is the scope of the different variable types? How long does the variable last and retain its value? Maybe the answers will be the topic for another post.
  • Automatic variables
  • External variables
  • Static variables
  • Register variables

Credit Due

I was silently patting myself on the back for persevering through a hard reports problem. In the middle of the night I came up with a plan that eventually resolved the problem. This is what I like to do – fix problems. Our current maintenance project is a perfect source of fun and tough problems to resolve. Then I got an e-mail from the test team. It was congratulating our reports developer for installing the patch that resolved the problem. I did a double take. There was no mention of me. I was the guy who did not quit on this bug. And I was the dude who brainstormed the fix for this problem. It was only because I was sharing my plans with the reports developer that she was even involved. In fact, the reports developer had initially decided to let another team research and resolve the problem. But here I was staring at an e-mail carbon copied to everybody giving her the credit for the fix.

Now I took a step back. Before acting hastily I usually try to think logically about events like this. What was the real impact of another person getting the glory for the work I did? In actuality, I don’t think there was any real impact. And the reports developer knows who cracked the case on this problem. The tester whose machine got fixed knows who did the heavy lifting for this problem. I assume the big dogs on the project know my contributions to the team. So perhaps I was making much ado about nothing. However I wanted to make sure that I was not getting walked on. I do not think that was the case here. You always hear that you have to toot your own horn. This scenario may not be the time to try to toot my horn.

After thinking about this situation a bit, it reminds me of a much more drastic example of somebody taking credit for my work. There was a high priority problem in the production environment a few years ago. I did the initial research on the problem and mapped out a solution to it. As always, I documented all of my findings in our trouble ticket control system. Apparently a manager had asked the test team what they knew about the problem. And it seems the test team looked up the problem in the trouble ticket system, and provided the information to the manager. Then the manager broadcasted a message to just about everyone (including the customer and myself), praising the work of the test team in analyzing and determining a resolution for the problem. Now that really stung when I read the e-mail. It was blatantly giving credit to the wrong people for the work. The e-mail even quoted some of the text verbatim that I entered into the trouble ticket system. Now that is some gall.

Luckily I let this last episode slide as well. I figured there was no real benefit to me if I exposed the situation for what it was. Instead I just chalked this up to a mistake in communication somewhere. My goal would be to not automatically get excited over any snubs similar to those that happened in these incidents.

Patch to the Rescue

Previously I had blogged about a problem with reports spawned by our application. Some testers received errors from the reports. The error message stated that there existed uncompiled Program units in the report. I felt some responsibility for this problem since it was my application that launched the reports. And I found out about the problem before our reports developer got in that morning. So I did some research. I shared the results of my research with our resident reports developer when she got in. Her plan was to defer the problem to the DBA Team. She thought the DBAs might reinstall the Oracle client to resolve the problem. That made no sense to me. So I continued in my investigation of the problem.

Yesterday I had fully understood the behavior of the problem. But I was at a loss for how to resolve it. It felt like a machine configuration issue. So I resorted to the evil advice of requesting the testers reinstall the Reports Server on their machine. It is the Report Server programs that actually run the reports and were displaying the error messages. I figured it was worth a try. I knew we were in trouble when the testers replied that a reinstall of Reports Server did not fix the problem. I decided to leave for the day because I was getting nowhere with this problem.

At home I took a nap in the evening. Then I woke up in the middle of the night. I tried to get back to sleep but could not. This reports problem was in my mind. Sometimes you need to forget all the details and all your ideas about a problem to generate novel approaches to solve the problem. So I went all the way back to the initial description of the problem. The testers got an error message stating that there were uncompiled Program units in the reports. Now the reports developer had encountered this same problem in the past. However the diagnosis was that the reports developer had installed the Reports Builder tool. And that installation affected the Reports Server. The fix for that problem was the installation of the latest patch for Reports Builder. That somehow also corrected the Reports Server on the developer’s machine. However a big deal was made about this patch. The party line was that it was only required on developers’ machines which had Reports Builder installed. If this was fact, then there would be no need to install the latest patch on the tester machines. However I wondered if this was indeed fact or just conjecture.

The next morning I shared a plan with our reports developer. I told her that we could try applying the latest Reports Builder patch to the tester workstations. Then we could use the results to further understand the nature of the problem. The reports developer had a copy of the patch executable on the network drive. So she ran over to the tester machines and had them install the patch. The test team then visited me, happily showing off successful prints from the reports. Apparently the patch fixed the problem. Now we needed to worry about pushing the patch out to all our users. However the mystery of the reports problem had been solved. I had mixed feelings about the success. It was good that this was not a coding problem that required a software fix. However it meant we now had to push out a big patch to a lot of customer machines. My team lead did not worry about this too much. We were already planning to push a big install which upgraded customer workstations. One more patch would not hurt, according to him. I wonder if, having resolved this troubling problem, I can go home for the day.

Report Problems

I read an e-mail this morning stating that we need to ship a build today that corrects problems found during internal test. So I scanned the list of problems in the test summary report. One of the bugs was in the application that I am responsible for. The trouble ticket stated that the testers got an error message when they ran a specific report. The error was a REP-0736, which means “There exist uncompiled Program units.”
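As an aside, one standard way to deal with uncompiled program units in a report is to batch-recompile the report definition with Oracle's rwconverter utility. This is only a sketch based on the documented keywords; the connect string and file name below are placeholders, not values from our project:

```shell
# Hypothetical example: force recompilation of all program units
# in a report definition file. USERID and the .rdf file name are
# placeholders -- substitute the real database account and report.
rwconverter userid=app_user/password@testdb \
  stype=rdffile source=my_report.rdf \
  dtype=rdffile dest=my_report.rdf \
  compile_all=yes batch=yes overwrite=yes
```

Of course, this only helps if the report file itself is stale; it would not explain why the same report worked on some machines and failed on others.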

The tester who reported this problem had not come in yet. Neither had the developer who normally takes care of reports. So I decided to get busy on the problem myself. The first thing I did was fully uninstall the version of the application that I had on my development machine. Then I installed the version that the testers had installed. Unfortunately I was unable to duplicate the problem. I also tried running all the reports in the application. I made sure that I was logging into the same database that the testers use. However I could not replicate this bug.

All morning I kept an eye on the door. When I saw the tester who reported the bug come in, I gave her a few minutes to get settled. I then talked with her about the problem. This is when I learned some key facts. Two of the testers encountered the problem. Two other testers could not reproduce it. I had her show me the problem in the application running on her machine. I then logged into the application myself on her machine and got the same error. Finally I tried some of the other reports the application runs. None of them worked either.

So I reviewed the facts. Not all testers get the problem. I do not get the problem on my machine. But I do get the problem on the tester’s machine. The tester and I decided there must be a configuration problem on the machines where the error occurs. This theory gained credibility when the tester confessed that she had installed the Reports Server software herself on both machines where the problem was occurring. The next step was to have the tester reinstall the Reports Server using the same instructions I had followed when installing it successfully on my machine.

We are already doing the build for the release of all fixes to test today. So I am hoping that this problem does not require a code change. I do not think it does. If it turns out that a correct reinstallation of the Oracle Reports Server fixes the problem, I say it will be time for me to go home, as I will have earned my pay for the day. When the reports developer came in, I touched base with her to let her know what I had found out and what the tester was trying in order to resolve the problem. I was a bit disturbed when the reports developer told me she knew about the problem and was hoping a DBA would reinstall the Oracle client to see if that had any bearing on it. This route is akin to asking somebody to reboot in the hope that it will fix the problem. Yes, you may get lucky once in a while. But it is no way to do organized and logical debugging.