Bad Day at the Office

It seems the gods did not want me to be productive today. Started off with me oversleeping. That by itself is not a show stopper. But then just as soon as I arrived at work, I needed to attend a going away luncheon.

On the way back from lunch, they closed all lanes on the highway. It took me forever to make it to the next exit and get off the road. By then I knew this was going to be a very bad day.

When it rains, it pours. All the networks went down an hour after I made it back to the office. And they remained down for the rest of the afternoon. The Help Desk said we had been warned.

Is there any moral to this story? Could be that the early bird gets the worm. Note to self - get in early next week.

Cover Letters

I was reading advice for software development job hunting. Somebody was advocating the use of creative cover letters. Apparently the boilerplate variety was getting too tired for this employer.

Normally I am all for mixing it up to stand out. However the example provided was a bit unusual. The employer said he got a cover letter from a girl. And the girl included interests like "long walks on the beach" and "romantic candlelight dinners" in her cover letter. WTF?

Who knows? Maybe if I were hiring and a supermodel were applying, it would help her to put unusual phrases like that in her cover letter. But why bother with that?

If you are a supermodel, you can just attach your picture to your resume. No need to waste time. I probably wouldn't penalize a supermodel for spelling mistakes either.

Last Call

I was about to check in a SQL script to go along with a big change to our application. The change affected the way the software starts up each year. But the delivery date for the change comes after next year's start-up. So my script basically rolled back the startup data so that my new code could be used.

The more I thought about this rollback script, the more I felt like it was cheating our users. It takes a lot of customer effort to start up the new year. Why make them go through the pain twice next year? I could blame it on the schedule. But that would be irresponsible.

So I asked a member of our requirements team to set up a meeting with our client. I wanted to make sure they understood the prior plan to execute my script, undoing a month's worth of startup work. The goal was to discuss a better way to handle the task.

Our project manager likes getting carbon copied on important e-mails like this. He responded with some ideas that did not make sense. So I went to have a talk with him. He threw out several ways to solve the problem, and I walked him through the pros and cons of each. In the end we agreed upon a solution that does not create a lot of work for me and saves the customer a great deal of time.

I find it refreshing to work under a project manager who actually started out in development. He can slip back into developer mode and understand the technical issues I am dealing with.

Going the Distance

I unit tested a fix to a customer problem. Configuration Management did a build with my code changes. Before shipping the fix to our customer, our test team set out to verify the fix. They came back and told me the application was aborting. Sure enough, the CM build was blowing up on my machine too. The previous build had been fine. I figured it must be my code.

So I got the latest copy of the code, but could not duplicate the problem. Next I went and retrieved an exact copy of the code that CM used for their official build. We actually have configuration management practices that let me trace the version of every file used in the build. But even with this version of code, I could not make the problem happen in Debug or Release mode.

The CM Team recommended that I let them try to do a build again, in case this was a one-time fluke. No such luck. The problem persisted even in the new build. So I went to the build machine and ran the application built there from Visual Studio. Could not make the problem happen. At this point I knew we had a very strange problem on our hands.

So I went through our whole application suite, trying each executable and DLL one at a time. I still could not make the problem happen after copying all our target files to the development folder. So for kicks I tried copying some of the system files we deploy during install. And this is where I stumbled upon the problem. Turns out the MFC DLL we ship was not getting properly installed into the Windows system directory.
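
In hindsight, a tiny diagnostic like the one below would have pointed at the culprit much sooner by reporting exactly which copy of the MFC DLL the running process picked up. This is just a sketch - "mfc42.dll" is my guess at the DLL name we ship.

  #include <windows.h>
  #include <stdio.h>

  // Report where the MFC DLL was actually loaded from (system dir vs. app dir).
  void ReportMfcDllPath()
  {
      HMODULE hMfc = GetModuleHandleA("mfc42.dll");
      if (hMfc == NULL)
      {
          printf("MFC DLL is not loaded.\n");
          return;
      }

      char path[MAX_PATH];
      GetModuleFileNameA(hMfc, path, MAX_PATH);
      printf("MFC DLL loaded from: %s\n", path);
  }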

Since I was in a hurry, I just changed our install code to copy a local copy of the MFC DLLs to our application directory. This works but is not elegant. Unfortunately it may have to stay this way.

Snake Sounds

So there I was investigating some problems in our latest build. All of a sudden I hear another developer cursing. Now this is nothing special. I myself curse sometimes when things go awry. But here comes the strange part. I then started hearing the developer make some hissing sounds. This continued for quite some time.

At first I thought I had better investigate. Then I decided against it. I already had a lot to do today. So I just chugged along. I did feel sorry for the person who had to share a cubicle with snake-woman. For you Harry Potter fans, Salazar Slytherin would be proud.

Vista Lock Down

Sometimes software applications have security built in from the start. I think this is how Microsoft Windows Vista was designed. We got our home PC locked down so that the apps are rated G.

With it being Thanksgiving, we have some family visiting us from out of town. Grandpa wanted to play some PC games with the kids. He is a computer tinkerer. And he was able to temporarily bypass the Vista restrictions and install a game rated M for Mature on our PC.

Here is where the funny part comes in. Gramps either logged off then back on, or rebooted the machine. He then found out that Vista had located the game that violated the ratings policy. Vista then proceeded to uninstall the game on him. LOL. He came and asked us for help in turning the Vista security off for good. Nice try guy.

The New Guy

Developers have been leading the charge to code new functionality in the app. Due date is next Monday. Too bad we have not tried to do a build with all the new stuff yet. Sounded like a job to hand off. So it got assigned to the new guy.

The new guy did not make much noise. So I figured he was able to do the build successfully. Luckily our team leader kept checking in with him. It seems the new guy kept quiet because nothing was working.

Our team lead got fed up and started to worry that the build would not be fixed in time. So he sent me in to help out. The first problem was missing custom build instructions for a Pro*C file. The second problem was missing include directory options. The final problem was harder. But in the end we found that a developer had been removing files from the source code repository that the build expected.

Nobody expects a new guy to step up and figure out tough problems on his own. But we do expect you to dig in and do the hard research. More importantly, you need to speak up when you need help. Otherwise we think everything is OK and find out the truth too late.

Software Business

I like to read articles about software development. Mostly I choose what is popular on reddit. A recent link brought me to a WTF post that had a whopping 171 comments. So I knew I had to read this story.

Apparently a novice programmer got a job as a technician in a PC repair shop. In his down time, he coded up a customer work order system. This replaced a piece of garbage system that cost the owner $1500.

Then this novice programmer wrote a Computer Cleaner application from scratch. The app was done in time to market it for the Xmas season. Preorders alone brought in $50k of revenue. Altogether the work this novice programmer did increased revenues from $20k to $350k in one year.

Turns out the not-so-novice programmer asked for a raise from his paltry $22k a year. In the end he got fired. I consider that a good thing. If this guy could rocket revenues up to $350k a year, he needs to start his own software business. That way he can pocket the big earnings for himself.

Clearcase Confusion

We have two parallel development efforts going on right now: the baseline work and the new changes. When we started doing the new changes, we had the configuration team set up a separate branch for the code. Since our source code control tool is Clearcase, it seemed best to let the "experts" do the dirty work.

Our goal was to keep the new change branch a superset of the baseline branch. Any time we made a change to baseline, we were supposed to also make the exact same change to the new change branch. At some point in the future we plan to fold the new change branch back into baseline, making a new baseline.

Today I was trying to promote some baseline fixes to the new change view. The changes were significant. So I asked our developer most familiar with Clearcase whether I could just copy and paste the baseline version to the new change view. The Clearcase expert told me to just graphically merge the files using Clearcase. I felt a bit uneasy about this. But I thought I might as well give it a try.

At first I could not see the right branches to select where to merge to and from. It turns out I first had to check out the file on the destination branch before I could see it. But even after that I could not check in the resulting changes from the merge. Apparently the act of checking out the file caused the new version to "move" from the baseline to the new change branch.

After some meditation I realized my visual model of the Clearcase branches was not correct. I had assumed that when we branched out, a snapshot of all the versions in baseline at that time would become the starting point for the new change branch. But this was not so. For any file that has not yet been changed on the new change branch, that branch simply selects the same version as baseline; the branch only comes into existence for a file once you check it out. Who would have thunk it?

Maybe I will figure out a way to describe this version control configuration in a picture. For now I hope my words, plus the sketch below, suffice. And as usual, you should be careful which abstractions and assumptions you make.
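
I have not looked at the actual config spec the CM team wrote, but I suspect it is something like this sketch (branch names made up), which would explain the behavior exactly.

  element * CHECKEDOUT
  # Pick up any work already done on the new change branch...
  element * .../new_change_br/LATEST
  # ...otherwise fall through to baseline, and create the new change
  # branch automatically the first time a file is checked out.
  element * .../baseline_br/LATEST -mkbranch new_change_br

If that is close to reality, then nothing ever "moved" anywhere - my checkout simply created the first version of that file on the new change branch.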

The Build Saga

Today a second developer came and asked me to peer review his software release documentation. I marked up his hard copy with a lot of red marks. And I guess I should feel good that the peer review is adding a lot of value towards generating a high-quality release. But an ideal scenario would be a minor correction here or there during a peer review, not a massive rewrite.

Here are some of the problems I found in the software release documentation:
  • Wrong build number listed
  • Wrong file timestamp listed
  • Not documenting special circumstances of release
  • Filling out sections of the document that our customer reserves for their use
  • Filling out other sections incorrectly, violating customer policy

These are the basics that we should always get right. Perhaps developers and configuration management know they can be sloppy since we have a process of review that catches most of these problems. Or maybe people have Thanksgiving fever and just want to slap something together and go on vacation.

I know I personally perform quality control on documents I generate before I pass them on to others for review. But that's just me. Maybe that's what our process is really all about. No matter what the skill or level of effort of the staff, the process is still set up to identify and resolve problems before we ship stuff out to our customers. Now this usually only works on big projects with big budgets. If I worked at a startup, I imagine there would be minimal process and more individual responsibility to get things right the first time.

Bumbling the Build

A fellow developer coded a change to fix a problem reported by our users. He scheduled a formal release of the application involved. The request went through our normal process. Configuration management actually did the build. We have set up the process so that development gets to review the build before it gets shipped out.

This is where I come into the picture. The developer asked me to do a peer review on the documentation for the release. So like any good reviewer, I pretended like I was the customer and followed all instructions in the release document. Found a couple clerical problems that could be fixed real quick. I also found a show-stopper: the install program did not install the application. So I informally used my veto power and held up the release.

When I went to discuss this main issue with the developer, he said he also found that the install program did not actually install the application on his computer. But he said it worked on another machine. At this point, the install appeared to only be working 1 out of every 3 times. I don't like these odds.

Because this was a critical release, I volunteered to dig in and find out why the install program was not working. Our build scripts are written with Apache Ant. The scripts call Visual C++ to produce the EXEs and DLLs. The scripts also call Installshield to convert these into install files. I think the scripts also use WinZip and/or Ant to turn the final set of files into one self-extracting executable that we deploy.

So I started by manually extracting the files. Everything looked good. Then I ran the install in verbose mode. No errors seemed to come up. I tried closing out all other Windows apps before running the install. No luck. I tried uninstalling a lot of other applications first. Still no clues. Finally I ran the install in verbose mode one more time and looked for anything unusual. Even though the install went by fast, I saw some of the files it was unpacking and installing. These files were not part of the application that my coworker was trying to release. They were from another application in our suite. That was it.

Turns out somebody took the install executable from one of our other applications, renamed it to look like the latest release we needed, and passed it on to development. This in and of itself was a heinous act. But the real crime would have been if we had allowed this release to go out despite the crucial problems detected during peer review. Luckily our process saved us.

Scrollable Dialogs

Our system at work has a lot of big scrollable dialogs. The user interface is kind of weird. But for the most part it works. Our customer was testing the latest version of the apps and kept having problems on one screen.

Apparently the value of the one combo box that initially gets the focus kept changing when the user scrolled the mouse wheel. This happened even after the user clicked on the vertical scrollbar.

The reaction from most people on the project was to say that's how the control with the focus behaves. However I take all trouble tickets seriously and did not want to blow off the concern. So I checked what Microsoft Word does in this scenario. And sure enough - Word will not scroll the control in focus if you click on the scrollbar and scroll the mouse wheel. Microsoft Word scrolls the whole dialog screen.

So far I have only started looking at ways to quickly fix this problem. At first I hacked in a handler for WM_LBUTTONDOWN on the main dialog. The handler tried to send a WM_KILLFOCUS to the control that had focus. Like most hacks this did not work. And what did I do? Make a more complex hack. I created a separate hidden button to which I switched the focus on WM_LBUTTONDOWN.

The hacks only worked for some left mouse clicks on the dialog. If you click on the scroll bar area, you need to handle WM_NCLBUTTONDOWN because that is a non-client area of the dialog. This hackology was getting too deep without consistent results. So now it's time to go back to the drawing board and fix this problem right. Any ideas?
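
Here is the direction I want to try next: intercept the wheel in the dialog before the focused control ever sees it, and scroll the dialog instead. This is only a sketch - the class name is invented, and it assumes the dialog already handles WM_VSCROLL for its scrollbar.

  class CBigScrollableDlg : public CDialog
  {
      // existing controls and members omitted
      virtual BOOL PreTranslateMessage(MSG* pMsg);
  };

  BOOL CBigScrollableDlg::PreTranslateMessage(MSG* pMsg)
  {
      if (pMsg->message == WM_MOUSEWHEEL)
      {
          // High word of wParam is the wheel delta; positive means scroll up.
          short zDelta = (short) HIWORD(pMsg->wParam);
          int nNotches = zDelta / WHEEL_DELTA;

          while (nNotches > 0) { SendMessage(WM_VSCROLL, SB_LINEUP, 0);   --nNotches; }
          while (nNotches < 0) { SendMessage(WM_VSCROLL, SB_LINEDOWN, 0); ++nNotches; }

          return TRUE;   // eat the message so the combo box never spins
      }
      return CDialog::PreTranslateMessage(pMsg);
  }

If that pans out, the hidden button and the WM_NCLBUTTONDOWN hacks can both go away.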

Who is Phil Haack?

I like reading the latest links that programmers post. A recent link brought me to a page about managing complexity. The page was apparently written by a dude named Phil Haack. But this was only his handle, not his real name. He claimed to be a senior project manager at Microsoft.

The more I read this guy's web page, the more I doubted he really worked for Microsoft. I mean the page was littered with Google ads. And he had a Gmail e-mail address. LOL! What Redmond manager worth his salt supports Google publicly? The guy posted his resume and surprise: Microsoft was not listed on it.

I believe in doing due diligence. Since this Phil Haack piqued my interest, I decided to dig a little deeper. Then I found the info that cleared things up. "Phil" had only recently gotten a job offer from Microsoft. He was going to be a senior project manager at Microsoft.

It will be fun to see whether this Haacker changes his public image after joining The Borg.

New Features

At the start of our development cycle we negotiated which new features would be added to next year's software suite. A schedule was made to develop these features. Somewhere in the middle of development our customer decided to add 3 new major features. Since they were ready to pay us more money, these additions were accepted and added to the schedule.

Now we are at the point where we are supposed to deliver the first additional change to testing. Surprise. The software is not ready. The delay is not entirely due to the late addition of the feature, but mainly because it took a long time to agree upon the requirements and design. Apparently a lot is riding on this first delivery. The customer community at large is using this delivery as a milestone to determine the confidence they have in development.

Part of the comedy in this situation is that, even the week before the delivery, I heard there was new customer talk about how the software should work. Luckily I have little to do with the first major feature. The guys doing that development are putting in the overtime. At least they are getting paid extra.

Another funny thing about the situation is that you need to cut corners when you are behind. Instead of putting a spreadsheet control in the application, they are just launching Microsoft Excel. Sounds like a valid shortcut. But for a while there were even some problems doing this programmatically. There was some weird green triangle showing up in the Excel title bar. I don't know much of the detail behind this because I am trying to stay out of it. There is just something hilarious about the whole ordeal.

Doc Power

My buddy used to say that "Documentation is the Power of Information". We have a ton of documentation that says how our system is supposed to work. In the good old days, all of this documentation was written with Adobe FrameMaker. When we published the docs, they were converted to PDF format. This worked pretty well. If you needed to make changes you did it in FrameMaker. Then you told the documentation team to cut a new PDF copy to release.

A couple of things happened on the way downhill. First our documentation team got cut, leaving us with contract documentation support when needed. Now we don't have anybody. So we have to sacrifice one of our programmers to be the documentation guy.

Another thing that happened was we went to Rational Rose. This in and of itself is not a bad thing. The theory actually sounded quite good. Do all of your design in Rational Rose. Then run a report to extract the design info to Microsoft Word output. The problem is that this report process doesn't really work too well. So now when we need to get documentation updates, we make the change manually in the Microsoft Word file AND update the Rose source.

Something is very wrong with this process. It is no wonder I have little desire to add all the good design information to the design docs. Somewhere along the way to having the system fully documented in a nice tool like Rose, we got derailed into a documentation nightmare. So much for documentation being the power of information.

Installation

We have a number of applications in our suite. And for the most part, each app has its own installation program. These installations were developed and continue to be maintained using Installshield Professional version 5.5.

Our installs do not do anything out of the ordinary. They unpack some DLLs and register them. Set up some keys in the Windows registry. Put some icons on the desktop and add links to the Start menu.
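
For anyone curious what "register them" actually means, it boils down to something like the sketch below - roughly what regsvr32 does under the hood. In our installs Installshield handles this step for us, so this is illustration, not our real install code.

  #include <windows.h>

  // Load a COM DLL and call its DllRegisterServer entry point.
  bool RegisterDll(const char* dllPath)
  {
      HMODULE hDll = LoadLibraryA(dllPath);
      if (hDll == NULL)
          return false;

      typedef HRESULT (__stdcall *RegisterFunc)(void);
      RegisterFunc pfnRegister =
          (RegisterFunc) GetProcAddress(hDll, "DllRegisterServer");

      bool ok = (pfnRegister != NULL) && SUCCEEDED(pfnRegister());
      FreeLibrary(hDll);
      return ok;
  }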

We have been doing fine with our original version of Installshield for so long that I did not even know that Installshield the company was bought out by Macrovision back in 2004. This might be a testimony to how well this package does its job.

I have heard about some installation/scripting packages from Microsoft. Perhaps it will be time to look at them further if we ever get around to moving our apps to dot Net. But since we have a good-sized base of around 600 users, and plan to add a whole lot more next year, there is a management desire to go to the web. System administrators don't want to worry about pushing our apps out to the desktop when they can just update one web server.

Broken Build

We are about to release a new version of our application suite to internal test. All new code is stored in a separate Rational Clearcase view. One of our developers has been on an extended vacation for a couple weeks. Before she left, she checked all her latest code into the view. Found out today that her code does not compile.

Our normal build script gets the code to build from the main Clearcase view. I asked the author of the build scripts to give me a version that would build from the new view. With the new script in hand, I tried to get a build done. Compiler kept complaining about missing files. The way the scripts work is that they label the latest code, update a build view to access all labelled files, then copy the files from this view locally to build.

At first I thought the missing files were not getting labelled. A quick check in Clearcase showed some missing files were checked out by another developer. After a quick team pow wow, I got the developer to unreserve the files. Build still did not work. Then I saw that the label was getting applied correctly. Time to take a closer look at the build scripts. The evidence pointed to the scripts not updating the view to look at the correct branch in Clearcase. Got the script author to fix this and the build was a success.

Part of problem solving is to make hypotheses about observed events. The key is to verify that all assumptions are true before treating the hypotheses as fact. It is all too easy to skip this step. The result is that you travel down wrong paths and don't know where to turn.

Legacy Tools

We use a lot of older tools on my project. For example, we are still on Visual Studio 6.0. This tool came out in 1998. Since then Microsoft has come out with Visual Studio 2002, 2003, and 2005. That makes us three versions behind the current one. Pretty soon Visual Studio 2008 will be coming out, putting us further behind the curve.

There was an opportunity to move to a more recent IDE when the system was being re-engineered a couple years ago. Unfortunately the powers that be decided to switch to Java. The price was right since the Java IDE was free. However the re-engineering effort failed and we had to resurrect the Visual Studio 6 version of our code.

Currently there is not a huge business case to get the client to invest in Visual Studio 2005. That will require money to purchase the licenses for our entire team. A higher cost will be the work to port the current code to the new compiler. All of this will provide little to no benefit to the business users of our application suite.

I am thinking that, as some of the developers gain subject matter expertise, a good reason to upgrade is to keep the developers happy. It is a no brainer that hiring replacement employees is expensive. If we can tie the upgrade to a crucial business enhancement as well, we may have a slam dunk proposal.

Debug by Phone

Our client has a team that does system acceptance testing. They are all employees of the client. They keep us honest by independently testing our apps.

Recently this team submitted a trouble ticket. I could not duplicate the problem. The trouble ticket had lots of information. But I suspected part of the problem was their data.

In the old days, I was on good terms with this team. They let me log into their database. Now there is a whole new test team. And they are reluctant to give me a login. Therefore I have to "Debug by Phone" to resolve problems.

Try to picture the difficulty of running queries by talking to someone on the phone who does not know SQL*Plus. Bottom line is that I cannot solve their problems quickly. This is not good because I have a lot on my plate. On that note, time to get back to coding.

Testing Troubles

Our project has an independent team that does testing. Sometimes they need help reproducing the problems that get fixed. I just sent them a performance enhancement for a very slow app. They tried to duplicate the problem. But they just did not have enough data. So I gave them a script which created lots of data.

The tester got back to me and said the script was not working. He tried to run it a couple times. But it kept aborting after 10,000 rows were added. I got his database login and tried running the script myself. Same problem. This was strange because it worked every time in my development database.

The weird thing about the problem was that the error was not occurring directly in my script. Instead it was failing in an audit database trigger. The problem always happened after the 10,000th row was inserted. I looked at the source code of the trigger. The code obtained a unique number using a database sequence. It then used the unique number as the key to insert records into the audit table.

It took a while to come up with the test that exposed the source of the problem. I checked all existing keys in the audit table. And wouldn't you know it? Some of these were higher than the numbers coming back from the database sequence (which was supposed to provide new unique numbers). After cycling the sequence past the duplicate keys, the script ran fine.

As a follow-up, I had a talk with our DBA Team Lead. I wanted to ensure this could not happen in Production. He told me I should have sent him the problem in the first place. He knows I have a lot of important things to do other than debugging database issues. I will take him up on the offer soon.

The Build Machine

All executables we ship to Production are built on the official Build Machine. We are gearing up for next year's software. So we have been using the future year Build Machine recently. But the client said a current year problem had to be fixed now. It turned out to be an adventure to get a current year build done.

The first chore was to locate the instructions to build the current year software. It had been a long time since we did this. Luckily I keep hard copies of important documents like the build instructions. So I started following the instructions. I failed early by not being able to remotely connect to the Build Machine. I asked the configuration management team for help. They told me to come on up to their floor.

At the CM team's cubicle, I found a row of workstations all controlled by a single keyboard. I told them I needed to find the current year Build Machine. I figured the thing might have been powered down. The CM guy did not know we have separate machines for the current and future year software. I asked myself why we let them control our machines.

Since the CM guys were getting me nowhere, I tracked down the guy who wrote the build scripts. He said the System Administrators had renamed the Build Machine. I was able to piece together the new network name based on some rules the SAs used. With that I was able to log into the Build Machine.

The next step was to kick off a build. The build script errored out fast. Apparently it was trying to access a Clearcase view that no longer existed. I checked all my views. The missing one did not seem like one I remembered using. I had to speak again with the author of the build script. He suspected the CM team had changed the view name on us. I sensed a pattern in these problems.

When I finally got the darn app to build, I updated the documents that show how to do a Production build. I also checked in an updated script with the correct Clearcase view. No reason any other developer should have to go through this pain.

Peer Review

Developers on my team have a self-imposed policy of peer reviewing every change we make. This adds some overhead to fixing problems. But the benefits far outweigh the costs. I continue to be surprised by the value added when a second pair of eyes checks my work.

There are 2 main documents generated by a developer for peer review: (1) the Code Diff and (2) the Unit Test Plan. The Code Diff tells a reviewer what you changed and why you changed it. The Unit Test Plan is an outline of how you debugged and verified your changes. Sometimes the act of actually writing your unit tests down improves their quality.

Recently I was asked to peer review a change in the sorting for a spreadsheet on one of our screens. The sorting was multi-column, along with some non-trivial rules for the sort order. Luckily I had a lot of domain knowledge for this part of the app. Here are the things I looked for when doing the review:
  • Were the variable names chosen to provide meaning?
  • Are there any "magic numbers" in the code?
  • Are tricky pieces documented with meaningful comments?
  • Was the code written so that it could be easily maintained?

In the end I had to go through two passes of peer review with the developer who coded the changes. The implementation worked. But the first cut was a maintenance nightmare. I tried to focus on the work product and not the developer. And I made sure the atmosphere was truly that of a peer assisting with quality improvement.

Ultimately, I think the routine turned out to be a solid piece of code. Only time will tell for sure. But I bet a new guy could come in, read the code, and understand what is going on without ever talking to the original developer.

Keep Alive

All our source code is managed by Rational Clearcase. This tool is a little more complicated than other packages I have worked with. But no big deal. We set up multiple "views" per developer to work with different source code repositories and versions.

Some time in the last year, our company's system administration group implemented a new policy on views:
  1. Views not used in 30 days are made unavailable
  2. Views not used in 60 days are deleted

I can understand the rationale (no pun intended) of this policy. The administrators are tired of having lots of unused views taking up resources. Fine with me. But my problems started when I needed some infrequently used views. Since they had not been accessed in a while, they did not initially work. I tried to recreate them, but all I got were errors. WTF?

After submitting a system administration trouble ticket, and escalating the issue up the management chain, I got a little help. Apparently the automatic view deletion script is buggy. It left my retired views in a permanently unusable state. Great.

Here is my plan to combat this unfortunate set of circumstances: Write a program to make sure none of my views get stale. I am a programmer after all. Muhahaha. I bet the other guys on the development team will find my prog handy. I shall call it "Keep Alive".
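
Here is the rough idea, sketched out below. It assumes that simply starting each view with cleartool counts as "use" under the administrators' policy - something I still need to confirm - and a real version would filter the list down to just my views.

  #include <stdio.h>
  #include <stdlib.h>
  #include <string>
  #include <vector>

  int main()
  {
      std::vector<std::string> tags;

      // Ask Clearcase for the list of view tags, one per line.
      FILE* pipe = _popen("cleartool lsview -short", "r");
      if (pipe == NULL)
          return 1;

      char line[512];
      while (fgets(line, sizeof(line), pipe) != NULL)
      {
          std::string tag(line);
          while (!tag.empty() && (tag[tag.size()-1] == '\n' || tag[tag.size()-1] == '\r'))
              tag.erase(tag.size()-1);
          if (!tag.empty())
              tags.push_back(tag);
      }
      _pclose(pipe);

      // "Touch" each view so it never looks stale to the cleanup script.
      for (size_t i = 0; i < tags.size(); ++i)
      {
          std::string cmd = "cleartool startview " + tags[i];
          system(cmd.c_str());
      }
      return 0;
  }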

Performance Tuning

Recently I got a trouble ticket from a high-profile user. An application in our suite was having all kinds of performance problems. Normally I pass performance problems to our Performance Engineer. But I also do some upfront legwork to help isolate the source of the problem first.

The description in the trouble ticket did not make a whole lot of sense. Rather than waste time guessing about this, I called the user directly. During our chat I determined which user operations were slow, and how long each one took.

Initially I was unable to replicate the problem in a development environment. So I queried the Production database to check the volume. Aha! Production had over 50,000 records. My test set only had 1,000. So I wrote a small PL/SQL script to generate test data that matched the Production volume.

Now I was able to experience the performance problems in development. Next I did some profiling to figure out where the app was spending all the time. Turns out that adding, deleting, and editing records were very slow. But the delay was not in the SQL code. The problem stemmed from a poor design. After each add/delete/edit, the whole data set was reloaded and reprocessed from the database.
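
The shape of the fix looks roughly like the sketch below (invented names, not our actual classes): keep the records cached in memory and apply each add, delete, or edit to the one affected entry, instead of reloading and reprocessing the whole set from the database.

  #include <map>
  #include <string>

  struct Record
  {
      long        key;
      std::string data;
  };

  class RecordCache
  {
  public:
      // The expensive path - done once, or when a full refresh is truly needed.
      void LoadAll(const std::map<long, Record>& fromDb) { m_records = fromDb; }

      // Cheap incremental paths used after each user operation.
      void Add(const Record& r)  { m_records[r.key] = r; }
      void Edit(const Record& r) { m_records[r.key] = r; }
      void Remove(long key)      { m_records.erase(key); }

  private:
      std::map<long, Record> m_records;
  };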

I am currently regression testing smarter add, delete, and edit operations. Got to make sure I didn't break any functionality with the fix. So far a peer review of my first cut has shown that a couple things are broken.

Linker Errors

A developer on my team was trying to add new functionality to an Active Template Library (ATL) C++ project. Unfortunately he was a Java developer. He generated code with the ATL wizard to create a new class. But he got a lot of linker errors.

error LNK2001: unresolved external symbol

I was asked to help out. At first I did not know why he got the linker errors. But I asked some questions to get to the root cause of the problem.
  • Did he get a linker problem for all his functions?
  • Where was the new code being accessed?
  • What exactly does this specific linker error mean?
  • What other info did the linker provide on the errors?

It appeared that the function names in his new class were getting mangled. The new code was part of a DLL. He was trying to use the code in an EXE. I asked if he had made the EXE depend on the DLL. He had, so that was not the problem.

I recommended the developer look up exporting C++ classes from DLLs in the MSDN help. He did not have MSDN installed locally. And wouldn't you know it? He could not access MSDN online either. So we googled "DLL export classes". First hit was DLLs Made Simple.

After putting __declspec(dllexport) in front of the class declaration, the linker problems went away. Maybe using DLLs really is simple. Asking the right questions and getting good technical information can make hard problems seem easy in the end.
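
For reference, the usual header pattern looks something like the sketch below (names invented). The DLL project defines MYDLL_EXPORTS in its settings, so the same header expands to dllexport when building the DLL and dllimport when building the EXE.

  #ifdef MYDLL_EXPORTS
  #define MYDLL_API __declspec(dllexport)
  #else
  #define MYDLL_API __declspec(dllimport)
  #endif

  // Exporting the whole class makes every member callable from the EXE.
  class MYDLL_API CNewAtlHelper
  {
  public:
      CNewAtlHelper();
      void DoWork();
  };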