Feedback

The combination of senses and responses forms the basis of the core aspect of systems theory. Feedback emerges from these connections and is worth studying as an element in its own right. Feedback makes a system handle input and adjust rather than do what it’s told. It might be easier to explain this with a counterexample. When you plug the bath and run the tap, you are filling the liquid stock. There is no feedback, so the bath fills at a constant rate until it overflows. If you turn off the tap and remove the plug, the bath will empty. If you want to reduce the water level to a particular height, you must monitor it yourself and stop the flow once the specified height is reached. Again, this bath is not a system because it does not act in response to anything (well, perhaps except for the overflowing). It has to be manipulated directly.

The cistern example is a system that maintains a liquid level. The system reacts to the water level dropping by increasing the flow. We know it senses because we can detect it responding to input. There is a negative feedback loop: as the gap between the current liquid level and the desired level grows, the flow increases proportionally to close it. In a steam engine, a governor throttles the steam supply to stop the machine from running too fast. A kettle boils until the bimetallic strip bends enough to switch off the power to the heating element.
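
To make that self-righting behaviour concrete, here is a minimal sketch of a proportional negative feedback loop, loosely modelled on the cistern. The target level, gain, and step count are illustrative assumptions, not a model of any real valve.

```python
# A minimal sketch of a negative feedback loop, modelled loosely on the cistern.
# The target level, gain, and step count are illustrative assumptions.

TARGET_LEVEL = 10.0  # the level the float valve is set to maintain
GAIN = 0.5           # how strongly the inflow reacts to the gap

def inflow(level: float) -> float:
    """Sense the gap to the target and respond with a proportional flow."""
    gap = TARGET_LEVEL - level
    return max(GAIN * gap, 0.0)  # the valve can only add water, never remove it

level = 2.0  # start well below the target
for step in range(10):
    level += inflow(level)
    print(f"step {step}: level = {level:.2f}")
# The level climbs towards 10.0 and settles there: the loop is self-righting.
```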

Most of these systems are systems of equilibrium. We call them negative feedback systems, as they are self-righting. Their design pushes back against increases or decreases in a stock by altering one or more flows to return it to an equilibrium point. Systems can also have positive feedback loops, but these are generally unstable and unsustainable. For example, over-fishing reduces fish stocks, which leads to fewer new fish, which means any fishing is now over-fishing. The high-pitched squeal of a microphone held too close to its speaker is caused by the amplifier re-amplifying the sound it just played over the loudspeaker. The amount of ice on Earth affects the planet’s ability to reflect the sun’s rays; as the ice melts, the amount of heat captured by the land and oceans increases, raising the average temperature of the planet and reducing the amount of ice further. Positive feedback loops tend to cause runaway behaviour that needs to be halted quickly before it becomes uncontrollable.
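
For contrast, the sketch below models the microphone squeal as a signal multiplied by a gain greater than one on every pass through the loop; the numbers are again illustrative. Nothing in the loop pulls the value back towards an equilibrium.

```python
# A minimal sketch of a positive feedback loop: the microphone picks up the
# speaker's output and the amplifier plays it back louder on every pass.
# The gain and starting signal are illustrative assumptions.

GAIN = 1.3     # the amplifier boosts whatever the microphone hears
signal = 0.01  # a tiny initial noise in the room

for step in range(12):
    signal *= GAIN  # the output feeds straight back into the input
    print(f"step {step}: signal = {signal:.3f}")
# Unlike the cistern, nothing pulls the value back; it keeps growing until
# something outside the loop intervenes (the speaker clips, or someone
# moves the microphone).
```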

So much of our lives is driven by the concept of feedback. Evolution is a feedback mechanism, and so is Scrum. Systems theory is all about feedback and the manner in which things react to each other. Christopher Alexander’s work crashed into feedback loops multiple times. At the core of all living processes lie mechanisms to create phenomena, sense them, and adjust future behaviour in response.

What does developer feedback look like? How about promotions, raises, bonuses, and other forms of reward? They are feedback, so how does a developer respond? Well, only rarely in the way the person inventing the reward mechanism intended. As mentioned before, a complex system responds as designed, not as intended. It follows the path of least resistance, fulfilling the wish as deviously as possible to gain the reward without expending any unnecessary effort.

When you praise or promote programmers for creating new features and rolling them out, you create a culture of making new things and ignoring maintenance issues. Old features are dropped in favour of bonus- or promotion-worthy work. If, instead, you reward programmers for fixing bugs, they spend less time protecting the code against future bugs and more time developing it in a way that keeps bugs minor but noticeable and easy to repair. Feedback is a powerful tool, but it can create strange situations you don’t want.

What about code? Code has feedback too. When code is difficult to write and debug, it becomes hardened to change. Feedback from the code informs the developer that it’s unsafe to make changes, so it stays unchanged. This makes it even less safe to change, reinforcing the stasis until someone replaces it wholesale, cutting it out like an unwanted growth. Code that is changed frequently because of many small bugs gets changed a lot in the future too, because people will suspect it is the cause of new bugs. Without good tests, any change for the better will likely introduce new bugs, as the code has already proven itself ripe for side effects. It becomes a self-fulfilling prophecy, which is a form of positive feedback loop.

Motivation

Complete builds take longer, so integration takes longer, so people only attempt a final build once a week. This positive feedback loop can snowball because the reaction to long builds makes them even longer. A weekly build means there is more to integrate, so integration begins to take more than a day, so you make it fortnightly. Now there’s even more to integrate, so builds take a few days to prepare, so you make them monthly, and on it goes until you have annual releases.

Client feedback on a project, if it’s overly negative or personal, will not be taken gracefully. This leads to wanting feedback less often, so you wait longer between feedback meetings. The longer the gap between sessions, the more you invest before finding out whether you took the right direction. The more changes the client reviews, the higher the probability of a misjudgement or a misinterpretation of their needs. This displeases them, leading to worse discussions: another negativity-driven positive feedback loop.

Positive feedback loops need not always be about bad things. When you pick up a new tool or library and your work improves, you feel good about picking up new tools, so you do it more. The loop is positive and beneficial.

Systems theory gives us tools for thinking about how to motivate people. Motivation by reward is a form of feedback loop. You are motivated by the reward to do a thing; your response is to do the thing. If I want you to do a thing, how do I reward you so that you do what I want? Rewards and punishments work well in some limited situations. However, research into what drives people to do great work has shown this type of motivation to be complicated and fraught with dangerous consequences if done wrong.

Paying people to work only succeeds to a certain extent, and trying to get more out of people by promising bonuses and extra rewards beyond what they need tends to backfire. Frederick Herzberg’s two-factor theory of workplace satisfaction suggests that money and other extrinsic motivators only work as negative motivators: when they are missing, they act as a disincentive. Money doesn’t act as an incentive to work, but a lack of it is a reason to down tools. It is a requirement, not a bonus.

The other problem with extrinsic motivators is that they generally don’t get people to act honestly. When you reward people with money for doing something, and they want the money, they tend towards meeting the letter of the agreement rather than the intent of the deal, just as we saw with building contractors using the cheapest materials within design tolerance.