Forkcasting
As clear as a puddle of mud

What is the value of an estimate?

What would you pay to know which projects to take on?

Let's start with a thought experiment. Someone offers to let you bet $100 on a loaded coin flip: call it right and you get $200, call it wrong and you lose your stake. You can also decline to bet.

Ignoring the really deep questions like "why is this weirdo carrying around a loaded coin?", "if they're offering me a bet, they expect to profit, right?", and "how do I get them to go away?", we continue with the experiment.

Noticing your discomfort, they offer you one additional service: you can pay $5 to see a flip.

We can immediately see one thing: we should never buy more than 20 flips. If we bought 21 flips and then won our bet, we'd have spent $105 to win $100. We're never going to make that up in volume.
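A quick sanity check on that break-even point, as a sketch (the $5 flip price and the $100 net win come straight from the bet above):

```python
# Each flip costs $5. Winning the bet nets $100: the $200 payout
# minus the $100 stake. Spending on flips only makes sense while it
# stays below the best-case gain.
FLIP_COST = 5
NET_WIN = 200 - 100

max_useful_flips = NET_WIN // FLIP_COST
print(max_useful_flips)  # 20 -- flip 21 costs more than we could ever win back
```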

Each flip tells us something about the loaded-ness of the coin. Even a single flip tells us it's not 100% Heads or 100% Tails: if it lands on Heads, we know it's not 100% Tails, and we become more confident it isn't very heavily biased towards Tails, e.g. 99.5% Tails.

That said, even if the loaded coin came up Heads 20 times in a row, you might still reasonably think the true loaded-ness of the coin is somewhere between 83% and 99%. Such is life. Plus the weirdo might be some sort of Faustian Mathematician.
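Here's a sketch of where a range like that comes from, assuming a uniform prior over the coin's bias (an assumption on my part; a different prior shifts the interval a little):

```python
from scipy.stats import beta

# Posterior over the Heads probability after 20 heads and 0 tails,
# starting from a uniform Beta(1, 1) prior: Beta(1 + 20, 1 + 0).
posterior = beta(21, 1)

low, high = posterior.interval(0.95)  # central 95% credible interval
print(f"somewhere between {low:.0%} and {high:.1%}")  # ~84% and 99.9%
```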

Now, since you're talking to this coin-flipping weirdo, you decide you're also a weirdo. You make a counter-offer: one bet, but you're allowed to see the initial state of the coin, record the sound of the flip, and place your bet before seeing the result. You reckon you can use this to work out how often the coin turned in the air, but you're not confident. Your fellow weirdo says yes, but you have to pay an extra $30 for the privilege.

Now you're wondering: should I pay $30 to see six flips, or pay $30 for my fancy method? If the fancy method is a lot more precise, it's probably worth it.
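One way to frame the comparison as a sketch. The true bias and the fancy method's precision are both invented numbers here; the point is just that two options at the same price can buy different amounts of information:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two ways to spend $30: six $5 flips, or the fancy recording method.
# TRUE_P and METHOD_STD are made up for illustration -- the story never
# pins down how precise the recording trick actually is.
TRUE_P = 0.7        # the coin's actual (hidden) bias
METHOD_STD = 0.05   # assumed error of the fancy method's estimate

def posterior_std(heads, tails):
    # Standard deviation of a Beta(1 + heads, 1 + tails) posterior
    # (uniform prior), i.e. how unsure we still are after the flips.
    a, b = 1 + heads, 1 + tails
    return np.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))

# Average leftover uncertainty after watching six flips of this coin.
heads = rng.binomial(6, TRUE_P, size=10_000)
flips_std = np.mean([posterior_std(h, 6 - h) for h in heads])

print(f"six flips leave +/-{flips_std:.2f}; the method leaves +/-{METHOD_STD:.2f}")
# Same price, but under these assumptions the fancy method is sharper.
```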

This is similar to software estimation: estimation gives you information about the future, that information has value, and different estimation methods give you different amounts of information at different costs.

Let's make the analogy more explicit. You have a project and you want to know if it's going to be profitable. Assuming you know how much revenue the project will bring in, you have a range of options: <1> use the cost of similar-looking past projects, <2> ask a developer to take an hour to break it down into tasks and multiply the task count by the historical average cost of a task, or <3> call a 2-hour meeting with 5 devs to break the project down into tasks and run planning poker on each one.

Each of these costs more than the last, and each gives you more information. <1> might take you 20 minutes, so at $100 per dev-hour we'll say it costs around $33. <2> costs you 1 dev-hour (about $100), plus you need to look up the averages and do the arithmetic -- about 20 minutes again, for a total of $133. Finally, <3> costs you 10 dev-hours, about $1,000.
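The same arithmetic, spelled out as a sketch (using the $100/dev-hour rate those figures imply):

```python
DEV_RATE = 100  # $/dev-hour, the rate implied by the figures above

costs = {
    "<1> compare to similar past projects": (20 / 60) * DEV_RATE,
    "<2> 1-dev breakdown + averages":       1 * DEV_RATE + (20 / 60) * DEV_RATE,
    "<3> 5-dev, 2-hour planning poker":     5 * 2 * DEV_RATE,
}
for method, cost in costs.items():
    print(f"{method}: ${cost:,.0f}")  # ~$33, ~$133, $1,000
```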

If the total value of the project is less than $1,000 and you choose <3>, you've immediately lost money, just like buying 21 coin flips.

However, the really interesting bit is the difference between <1> and <2>. If you look over previous projects and find that <1> gave you results about as precise and accurate as <2> [1], you should prefer <1>: the same information at a quarter of the cost!

There's another subtlety here: it depends on what decisions you're making. If you're deciding between projects A and B, then the maximum value of your information is the opportunity cost (loss) of choosing A over B (or vice versa). If you're using the information to make promises to a customer, then the value is tied to the customer's trust in you and your team. Different decisions have different values, so they may call for different methods.
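A toy version of the A-versus-B case (all the profit figures are invented for illustration):

```python
# Invented expected profits for two competing projects, under whatever
# rough beliefs we hold before estimating anything.
expected_profit = {"A": 120_000, "B": 90_000}

# The most an estimate can do is stop us picking the worse project,
# so the gap between the two options caps what the information is worth.
max_info_value = abs(expected_profit["A"] - expected_profit["B"])
print(f"${max_info_value:,}")  # $30,000 -- don't spend more than this deciding
```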

You could reasonably object: I don't put a dollar value on every project, I don't calculate the odds, I don't reduce these decisions to numbers. That's fine. The principle still applies even if you never write the numbers down. Knowing that you're paying for information is probably enough to let you spot when your team is wasting time on estimates, and which estimates actually have value.

You might also say that you don't want to re-analyse every project to work out the best estimation method based on the value, the quality of past estimates, and so on. That's also fine. As above, you can roughly categorise projects/tasks/etc. and lay out a general process, e.g. projects worth around $10,000 get a task breakdown but individual tasks aren't estimated beyond the existing averages; projects worth around $200,000 get a task breakdown, planning poker, and so on. These rules of thumb don't need to be perfect, just good enough for the gains to cover the losses.
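A sketch of what such a rule of thumb might look like in code (the thresholds are the examples above; the method names are placeholders):

```python
def estimation_process(project_value):
    """Pick an estimation method by project value. Thresholds and
    method names are placeholder rules of thumb, not recommendations."""
    if project_value >= 200_000:
        return "task breakdown + planning poker"
    if project_value >= 10_000:
        return "task breakdown, costed with historical task averages"
    return "compare against similar past projects"

print(estimation_process(15_000))   # task breakdown, costed with averages
print(estimation_process(250_000))  # task breakdown + planning poker
```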

Finally, it's worth remembering that action produces information. You can start the work and cancel it if it looks too expensive. For example, instead of holding a 5-person, 2-hour planning meeting, your team could spend the same 10 dev-hours building the thing. If it looks dumb at the 10-hour mark, stop [2]. You're no worse off than if you'd spent the time in the meeting [3].

In conclusion, next time you're doing several rounds of planning poker thinking "is this worth it?" you might be asking the right question. Oh, and beware Mathematicians bearing weird bets.

[1] This assumes the task list has no other value. If it does, add that value to the overall value and make the same trade-off with different numbers.
[2] You open yourself up to the sunk cost fallacy here, so it's not a purely rational decision. There's a chance you'd continue despite evidence saying you should stop, just because you'd already spent 10 hours on it.
[3] -ish. Lead time matters too, but we're talking dev-hours here. Make it a 2-hour, 5-person time box if you want to be really precise; it doesn't change the point of the argument.