“Freakonomics” is a podcast in which economists apply economic principles to social phenomena. This week’s episode is about “prediction” across a wide variety of domains: politics, sports, and even witchcraft.


The short version: while some things are predictable, most of the things we attempt to predict (sports, stock prices, politics, the economy) are not.

What does this have to do with software? Plenty. Developers are asked to make predictions (estimates) all the time, with minimal information and in the presence of high uncertainty. History has shown that we’re not very good at it. How can we get better? A better question: can we get better at all? That is, is our chosen field a linear system or a non-linear one? Are there elements of linearity within the chaos that we can nail down? Is there a “fudge factor” for the chaos that we can build in to make our estimates better?

In my opinion, if you’re in a shop that grinds out the same sort of application over and over, using the same set of tools, and you have good historical data on actual task durations, your estimates will probably land pretty close to reality.
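To make that concrete, here is a minimal sketch of estimating from historical actuals. The numbers and the 80th-percentile buffer are made-up assumptions for illustration; the idea is simply that past durations for a recurring task type give you both a “likely” figure and a hedge against overruns:

```python
import statistics

# Hypothetical historical durations (in days) for one recurring task type.
history = [4.0, 5.5, 3.5, 6.0, 4.5, 5.0, 7.0, 4.0]

def estimate(durations):
    """Return a (likely, hedged) pair: the median of past actuals as the
    'likely' duration, and the 80th percentile as a padded quote that
    absorbs the occasional overrun."""
    likely = statistics.median(durations)
    # statistics.quantiles with n=10 returns the 9 decile cut points;
    # index 7 is the 80th percentile.
    hedged = statistics.quantiles(durations, n=10)[7]
    return likely, hedged

likely, hedged = estimate(history)
print(f"likely ≈ {likely} days, quote ≈ {hedged} days")
```

A research-and-development shop has no such history to draw on, which is exactly why the next paragraph’s caveat applies.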

On the other hand, if you’re in a research and development shop, trying to do something that’s never been done before, you’re almost certain to run into issues that you haven’t thought of, and thus haven’t accounted for in your estimate.

In either case, user requirements inject an element of chaos as well. The developer walks a tricky line here. On the one hand, if you build a system that doesn’t do what the user needs it to do, it won’t be used. On the other hand, if your project is subject to a continual stream of requirements changes, you will find yourself rewriting the same code over and over again; in that scenario, it’s entirely possible you’ll never get to a release at all.