Perverse incentives

If you want to get something done, you’re better off explaining the value of the result and leaving people to solve the problem for themselves. Telling people what to do to solve your problem, and rewarding them for doing what you tell them, turns them into robots, and they will act with all the emotional commitment of one. Consider the case of perverse incentives, sometimes called the Cobra effect.

There is a famous myth¹ in which the British Raj in India was not pleased with the number of cobras around, so they decided to cull their numbers. The British, being the British, made two mistakes typical of their heritage. The first was to underestimate the capacity for thought of anyone not from Britain.

Rather than do the culling themselves, the Raj decided to outsource the problem. Instead of asking the local population to decrease the snake population using their initiative and rewarding them appropriately, they offered a cash incentive for bringing dead cobras to the Raj. The idea was to reduce the number of cobras by buying them until none were left.

However, people are smart. When told they could get paid for bringing cobras to the Raj, the locals decided upon the wonderfully counterproductive plan of breeding cobras specifically to sell to the Raj. And so the wild cobra population stabilised while the Raj paid out more each day with no visible impact. This went on for some time until either someone noticed what was happening or the Raj concluded the approach wasn’t working; either way, they gave up and stopped paying for the cobras.

But that wasn’t the worst of it. The second mistake the British made was believing their decisions had no lasting impact. When they reversed their rules, they assumed things would go back to the way they were before. But the environment had changed: there were now quite a few cobra farms. When the Raj stopped buying cobras, the farms had no reason to keep operating. Rather than waste time killing the now-worthless snakes, the farmers simply set them free. In effect, the Raj had accidentally paid to increase the cobra population of the region.

The veracity of the story notwithstanding, the problem with reward mechanisms of this kind is that they do not consider how a system reconfigures itself to exploit any newly introduced flows of reward.

Misaligned goals

Another problem arises when you try to improve things but lose sight of your core goal. The Raj didn’t want cobras, so rewarding their presence in any location was never aligned with that goal. When you want to remove something, look for ways to reinforce removal, not movement or arrival in another location. When you want to gain something, look for ways to reinforce increase, not borrowing, stealing, or having more than another, as this last option often leads to destruction.

An example within living memory for many programmers is the Hacktoberfest T-shirt debacle. Participants were rewarded for making pull requests, not for improving the repositories: open enough pull requests during October and you earned a branded T-shirt. The co-ordinators of the event did not foresee the impact. Trivial updates to README.md files and even actively dangerous changes were submitted, all in the name of winning an item of clothing. The sudden influx of low-quality pull requests imposed significant extra work on the maintainers of many open-source projects.

If you reward programmers based on the number of lines of code they write, they will produce a lot of lines of code. If you reward testers on the number of bugs they find, they will file multiple reports for the same issue and prioritise the kinds of work where new bugs turn up frequently over hunting for difficult-to-notice bugs or regression-testing existing ones. They behave much like police officers with an arrest quota who patrol the neighbourhoods known for higher crime levels to stand a better chance of making their numbers. You don’t want testers to behave like the police of your application.
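
To make the lines-of-code point concrete, here is a hypothetical sketch (the functions are illustrative, not from any real project): both behave identically, yet under a lines-of-code reward the padded version earns several times the credit.

```python
def clamp(value, low, high):
    # The honest version: clamp value into the range [low, high].
    return max(low, min(value, high))


def clamp_padded(value, low, high):
    # Identical behaviour, padded purely to inflate a lines-of-code metric.
    result = value
    if result < low:
        result = low
    if result > high:
        result = high
    # A redundant, unreachable check that exists only to add rewarded lines.
    if result < low or result > high:
        raise AssertionError("unreachable")
    return result
```

Neither version is wrong, which is precisely the problem: the metric pays the second author more for delivering nothing extra.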

If you reward your programmers for fixing bugs, you might even notice them creating bugs so they can resolve them, or closing bugs as fixed when they aren’t sure, because as far as rewards go, attempting ten fixes that each have a 50% chance of success beats carefully fixing three for certain. That’s before even entertaining the problem of programmers’ overconfidence in their own debugging and fixing skills.
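
A quick expected-value sketch shows the arithmetic, assuming (hypothetically) that the reward counts only fixes that hold up and that failed attempts carry no penalty:

```python
# Expected number of successful fixes under each strategy.
reckless = 10 * 0.5  # ten rushed fixes, each with a 50% chance of working
careful = 3 * 1.0    # three fixes, each verified to work

print(reckless, careful)  # 5.0 3.0: guessing at ten beats carefully fixing three
```

The reckless strategy out-earns the careful one while leaving an expected five bugs still broken.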

Align incentives

In summary, feedback is a powerful tool that you must always keep in mind when writing policies or tactics for your strategies. A poorly conceived policy can backfire, and a misaligned tactic can make things worse in the long run.

  • Look to the core goal of your strategy and make sure that your incentives align with it.
  • Give the job of solving the problem to the people closest to where it can be resolved.
  • Make your goal transparent, and let others know why you desire it and what problem it solves.
  • Give others a way to show the quality they bring by solving your problem well.

These actions grant autonomy and open a path to a sense of achievement and mastery of the problem. So, incentivise solving your problem, not exploiting the wording of your command. There are many ways to reward people for achieving your goal that require almost no investment on your part. For example, many people don’t merely want to solve a problem; they also want to tell others how awesome they are at doing it.

¹ The famous (and apparently false) tale appears to have originated as an anecdote in Der Kobra-Effekt by Horst Siebert. I have not read the book but have heard the tale from multiple other sources.