Do you ever watch old black-and-white war movies? The weary soldier advances cautiously out of the brush. There's a clearing ahead: are there any land mines, or is it safe to cross? There aren't any indications that it's a minefield---no signs, barbed wire, or craters. The soldier pokes the ground ahead of him with his bayonet and winces, expecting an explosion. There isn't one. So he proceeds painstakingly through the field for a while, prodding and poking as he goes. Eventually, convinced that the field is safe, he straightens up and marches proudly forward, only to be blown to pieces.
The soldier's initial probes for mines revealed nothing, but this was merely lucky. He was led to a false conclusion---with disastrous results.
As developers, we also work in minefields. There are hundreds of traps just waiting to catch us each day. Remembering the soldier's tale, we should be wary of drawing false conclusions. We should avoid programming by coincidence---relying on luck and accidental successes---in favor of programming deliberately.
Suppose Fred is given a programming assignment. Fred types in some code, tries it, and it seems to work. Fred types in some more code, tries it, and it still seems to work. After several weeks of coding this way, the program suddenly stops working, and after hours of trying to fix it, he still doesn't know why. Fred may well spend a significant amount of time chasing this piece of code around without ever being able to fix it. No matter what he does, it just doesn't ever seem to work right.
Fred doesn't know why the code is failing because he didn't know why it worked in the first place. It seemed to work, given the limited ``testing'' that Fred did, but that was just a coincidence. Buoyed by false confidence, Fred charged ahead into oblivion. Now, most of us have known someone like Fred at some point, but we know better. We don't rely on coincidences---do we?
Sometimes we might. Sometimes it can be pretty easy to confuse a happy coincidence with a purposeful plan. Let's look at a few examples.
Accidents of implementation are things that happen simply because that's the way the code is currently written. You end up relying on undocumented error or boundary conditions.
Suppose you call a routine with bad data. The routine responds in a particular way, and you code based on that response. But the author didn't intend for the routine to work that way---it was never even considered. When the routine gets ``fixed,'' your code may break. In the most extreme case, the routine you called may not even be designed to do what you want, but it seems to work okay. Calling things in the wrong order, or in the wrong context, is a related problem.
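To make the first case concrete, here is a minimal hypothetical sketch (the lookup routine, its table, and the caller are all invented for illustration, not taken from the original text): a caller notices that a lookup routine happens to return null when handed a malformed key, and quietly starts using that as its ``not found'' test.

import java.util.HashMap;
import java.util.Map;

public class CoincidentalCaller {
    static Map<String, String> table = new HashMap<>();

    // The author documents this routine only for well-formed keys.
    static String lookup(String key) {
        if (key == null || key.isEmpty()) {
            return null;   // an accident of the current code, not a promise
        }
        return table.get(key);
    }

    public static void main(String[] args) {
        String userInput = "";
        // The caller quietly depends on the undocumented null-for-bad-input
        // behavior. If lookup() is later "fixed" to throw
        // IllegalArgumentException for bad keys, this code breaks.
        if (lookup(userInput) == null) {
            System.out.println("no such entry");
        }
    }
}

Calling things in the wrong order is the same trap, and that is exactly what Fred is doing below.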
paint(g);
invalidate();
validate();
revalidate();
repaint();
paintImmediately(r);
Here it looks like Fred is desperately trying to get something out on the screen. But these routines were never designed to be called this way; although they seem to work, that's really just a coincidence.
To add insult to injury, when the component finally does get drawn, Fred won't try to go back and take out the spurious calls. ``It works now, better leave well enough alone....''
It's easy to be fooled by this line of thought. Why should you take the risk of messing with something that's working? Well, we can think of several reasons:

- It may not really be working; it might just look like it is.
- The boundary condition you rely on may be just an accident. In different circumstances (a different screen resolution, perhaps), it might behave differently.
- Undocumented behavior may change with the next release of the library.
- Additional and unnecessary calls make your code slower.
- Additional calls also increase the risk of introducing new bugs of their own.
For code you write that others will call, the basic principles of good modularization and of hiding implementation behind small, well-documented interfaces can all help. A well-specified contract (see Design by Contract, page 101) can help eliminate misunderstandings.
For routines you call, rely only on documented behavior. If you can't, for whatever reason, then document your assumption well.
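As a sketch of both points (the class, method, and vendor names here are invented, not from the original text), a small routine can state its contract explicitly and reject arguments outside it, so callers never get the chance to depend on accidental behavior:

public class StringUtil {
    /**
     * Returns the last maxLen characters of s.
     * Contract: s is non-null and maxLen is non-negative. Anything else is
     * rejected outright, so no caller can come to rely on whatever the
     * current implementation happens to do with bad arguments.
     */
    public static String tail(String s, int maxLen) {
        if (s == null || maxLen < 0) {
            throw new IllegalArgumentException("tail: argument outside contract");
        }
        return s.length() <= maxLen ? s : s.substring(s.length() - maxLen);
    }
}

On the calling side, when you really must lean on something the documentation doesn't promise, a comment such as // ASSUMPTION: the vendor's parse() tolerates a trailing newline (verified against release 2.3 only) at least records the dependency where the next maintainer will see it.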
You can have ``accidents of context'' as well. Suppose you are writing a utility module. Just because you are currently coding for a GUI environment, does the module have to rely on a GUI being present? Are you relying on English-speaking users? Literate users? What else are you relying on that isn't guaranteed?
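As a hypothetical sketch of the problem (the class and method names are invented for illustration), here is a ``general-purpose'' reporting helper that quietly assumes both a GUI and English-speaking users:

import javax.swing.JOptionPane;

public class ReportUtil {
    // Looks reusable, but it silently assumes that a display is available
    // (JOptionPane needs one), that the user reads English, and that '$'
    // is the right currency symbol. On a headless server, or for users in
    // another locale, the "general-purpose" part evaporates.
    public static void reportTotal(double total) {
        JOptionPane.showMessageDialog(null, "Your total is $" + total);
    }
}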
Coincidences can mislead at all levels---from generating requirements through to testing. Testing is particularly fraught with false causalities and coincidental outcomes. It's easy to assume that X causes Y, but as we said in Debugging, page 83: don't assume it, prove it.
At all levels, people operate with many assumptions in mind---but these assumptions are rarely documented and are often in conflict between different developers. Assumptions that aren't based on well-established facts are the bane of all projects.
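One low-tech way to drag such assumptions into the open (a sketch under invented names, not code from the original text) is to write them down where they matter and make them checkable, so a conflicting assumption fails loudly instead of lurking:

public class OrderProcessor {
    // ASSUMPTION: orders arrive already sorted by timestamp. This came from
    // a hallway conversation, not from a spec, so it is checked here rather
    // than silently relied upon.
    public static void process(long[] timestamps) {
        for (int i = 1; i < timestamps.length; i++) {
            assert timestamps[i - 1] <= timestamps[i]
                : "assumption violated: orders not sorted by timestamp";
        }
        // ... actual processing would go here ...
    }
}

Run with java -ea so the assertions are actually enabled. If the check never fires, you have improved the documentation of the code; if it does fire, you have caught a false assumption early.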
Tip: Don't Program by Coincidence
We want to spend less time churning out code, catch and fix errors as early in the development cycle as possible, and create fewer errors to begin with. It helps if we can program deliberately:

- Always be aware of what you are doing.
- Don't code blindfolded. Building an application you don't fully understand, or using a technology you aren't familiar with, is an invitation to be misled by coincidences.
- Proceed from a plan.
- Rely only on reliable things. Don't depend on accidents or assumptions; if you can't tell the difference, assume the worst.
- Document your assumptions.
- Don't just test your code, but test your assumptions as well. Don't guess; write an assertion and actually try it.
- Prioritize your effort. Spend time on the important aspects, which are most likely the hard parts.
- Don't be a slave to history. Don't let existing code dictate future code; all code can be replaced if it is no longer appropriate.
So next time something seems to work, but you don't know why, make sure it isn't just a coincidence.
The following code fragments each rely on coincidences of one kind or another:

fprintf(stderr, "Error, continue?");
gets(buf);   /* coincidences: assumes stderr and stdin are attached to a
                console, and that the reply fits in buf (gets() performs
                no bounds checking) */
#include <string.h>

/* Truncate string to its last maxlen chars */
void string_tail(char *string, int maxlen)
{
    int len = strlen(string);

    if (len > maxlen) {
        /* coincidence: strcpy() with overlapping source and destination is
           undefined behavior; it merely happens to work on many platforms
           (memmove() is the reliable choice) */
        strcpy(string, string + (len - maxlen));
    }
}
public static void debug(String s) throws IOException {
    // coincidences: assumes the current working directory is writable and
    // stays the same for the life of the program, and that opening and
    // closing the log file on every call is acceptable
    FileWriter fw = new FileWriter("debug.log", true);
    fw.write(s);
    fw.flush();
    fw.close();
}
Extract from The Pragmatic Programmer Copyright © 2000 Addison Wesley Longman, Inc. Reproduced with permission.
Any trademarks are the property of their respective owners.