Slowly revealing the solution to a complex problem

Solving the puzzle of how to make something is one challenge, but knowing what to make is another one altogether. A customer could be unaware of what they need or be unable to explain it. The form of what you need to build might not even exist. In these cases, your development process could benefit from mimicking Christopher Alexander’s methods.

Many complex products are like this. Neither the user nor the customer will know what they want, but they will know what they don’t like. This kind of negative-only feedback is effective, yet it can create expensive problems for those unprepared to work this way. Alexander always worked with a customer, keeping one in mind if they were absent, much as developers use personas to guide their work. Instead of guessing what customers wanted, his processes were tuned to provide the things that would give them a more wholesome life. His team would take a deeper look at the patterns of the lives of those due to inhabit the buildings and sites once the development had ended. It’s these patterns of life that he observed most closely[Notes64]. The actions taken, the reasons why, and the values behind them all played a part in these analyses. They allowed for plans which not only satisfied the customers but also delighted them[NoO3-05].

Complexity is a word with a loose meaning in common parlance; however, there is a specific meaning of the term that Christopher Alexander managed to tame. It is therefore worth explaining what the word should mean to anyone interested in his solution. We may have some idea of the term when we consider complex projects; for example, they might include formidable problems or things made of many parts. However, I want to more concretely define the term and bind it to one particular way of thinking before using the word as much as I will in later chapters.

I encountered a better way to judge complexity when watching a talk by Rich Hickey entitled ‘Simple Made Easy’[1]. Everyone can benefit from watching the lecture, as the content is broadly applicable, even if you don’t know anything about Clojure. (I certainly don’t.)

Complexity and simplicity are not concerned with how difficult or large a system is but with how independent the inputs are from each other. A system with a thousand inputs and a thousand outputs, all wired one-to-one, is simple. However, a system with only two or three inputs, where every output depends on every input, becomes complex. One example is a manual car, in which the driving force depends on both the engine speed commanded by the throttle and the engagement of the clutch, balanced dynamically against each other. There are only two inputs and one output, but anyone who has driven a clutched drivetrain vehicle can attest that it takes some practice to learn the complex interaction of those two inputs.

In software projects, the metric for complexity can be the number of things that need to change in response to any change you intend to make. You can measure at the code level and count the variables driving an output; those coupled together like a throttle and clutch.
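To make the distinction concrete, here is a minimal sketch in Python (the systems and numbers are invented for illustration). The first function is simple in Hickey’s sense because each output depends on exactly one input; the second is complex because its single output depends on both inputs at once, like the throttle and clutch.

```python
def simple_system(inputs):
    """A thousand inputs wired one-to-one: each output depends on
    exactly one input, so any change is locally contained."""
    return [x * 2 for x in inputs]  # output[i] depends only on inputs[i]

def complex_system(throttle, clutch):
    """Two inputs, one output, but the output depends on BOTH inputs
    at once -- like the driving force of a manual car. The constants
    here are made up for the sketch."""
    engine_speed = 800 + throttle * 5200  # idle speed plus throttle response
    return engine_speed * clutch          # force is a product of both inputs

# Changing the throttle alone does not predict the force; the clutch
# position modulates every throttle value.
print(complex_system(0.5, 0.0))  # fully disengaged clutch: no drive at all
print(complex_system(0.5, 1.0))  # same throttle, entirely different outcome
```

In the simple system, counting the variables driving an output always gives one; in the complex system, every output change forces you to reason about both inputs together.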

Complexity can arise from technological choices. Lazy evaluation can have complex performance implications, and the use of smart pointers can entangle object lifetimes. Concurrency primitives for locking shared resources can deadlock at the most unexpected of times.
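The deadlock case deserves a small illustration. The sketch below is a generic example, not from any particular codebase: two threads take the same pair of locks in opposite orders, which is the classic recipe for a circular wait, and the usual remedy shown here is to impose a single global acquisition order.

```python
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()

def transfer(first, second, work):
    # Always acquire locks in one global order (here: by id()), no matter
    # which order the caller named them in. This breaks the circular wait
    # that causes deadlock when two threads lock the same pair in
    # opposite orders.
    lo, hi = sorted((first, second), key=id)
    with lo, hi:
        work()

results = []
t1 = threading.Thread(target=transfer,
                      args=(lock_a, lock_b, lambda: results.append("a->b")))
t2 = threading.Thread(target=transfer,
                      args=(lock_b, lock_a, lambda: results.append("b->a")))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(results))  # both transfers complete; no circular wait
```

Remove the `sorted` line and acquire `first` then `second` directly, and the same two threads can interleave into a deadlock; the elements (two locks, two threads) are harmless alone, but their relationship is where the complexity lives.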

All of these have something in common: the elements are not a problem in and of themselves, but challenging situations emerge from their relationships.

Now that we have defined complexity, we need to specify how we will use the word ‘complicated’. Something with many parts can be complicated. A complicated sequence or set of related things must be connected in a specific order. The elements or steps only interact in so much as they depend on each other in a strict ordering in time or space. When you change one part of a complicated system, you have a small surface area for change propagation: there is a linear chain of effects. The effect of a change does not ripple back around to the element instigating it, and the changes in the ripple are simple but frequently tedious (consider protocols, build sequences, taxes and expenses). These are all complicated processes, but they are often predictable systems. They are knowable. It’s possible to accurately estimate the total impact of your input before you have seen the output.

Given this definition of complexity, we can now pose some questions. First, how does one tame complexity in a project? Second, what did Christopher Alexander do to rein in the complexity inherent in a building project? Finally, do these measures align with software development and Agile principles?

Regarding the first question, is it even possible to tame complexity? The answer is yes to some extent, and one way is documented in Notes on the Synthesis of Form[Notes64]. Alexander’s approach to building was an ever-evolving version of this first work. It included up-front requirements gathering and a discovery process for deriving final forms by concentrating on how things were coupled, which involved explicitly shying away from grouping them by the categories with which we might naturally associate them.

This certainly does not sound like the agile development practices we know. It also does not appear to address complexity. But let’s review for a moment. Christopher Alexander tamed complexity by understanding what it was. It was the coupling of changes. So, rather than creating a hierarchical architectural plan where each part finds a place in a group with others based on our prejudices, he grouped them by the strength of their change coupling alone. He would continually ask questions like, “If I change this part of the design, will that other part have to change too? And if so, does it improve with it, or does it make things worse for it?”

The Notes on the Synthesis of Form approach to architecture includes the following steps:

  • Gathering all the elements that must be considered as part of the design
  • Revealing connections based on their complexity: how the elements affect one another
  • Finding the weakest-linked large groups, splitting them into separate conceptual groups, and finding a way to name or diagram them
  • Within each group, repeating the search for the weakest connections and splitting again
  • Continuing this splitting process until all solutions for all subsets of elements seem trivial

The above process was recursive. Large cutting lines were formulated first, and then those groups were cut up in the same fashion. In this way, his early work revealed connected pieces of our environment. As he worked on more projects, he found some connections occurring repeatedly. Consistently naming these recurring situations led to the discovery of patterns. With patterns, he could resolve parts of a complex system without the burden of the concrete problem biasing the solution. He could now avoid the XY problem, being wise to the deeper need hidden beneath a shallow problem definition.
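As a rough illustration of this recursive splitting, the following Python sketch (with invented elements and coupling strengths, not Alexander’s actual data) removes the weakest couplings until a group falls apart into disconnected pieces, then recurses into each piece until every group seems trivial:

```python
def components(nodes, edges):
    """Connected components of `nodes` under the remaining `edges`."""
    remaining, out = set(nodes), []
    while remaining:
        stack, group = [remaining.pop()], set()
        while stack:
            n = stack.pop()
            group.add(n)
            for (a, b) in edges:  # follow edges touching n in either direction
                for x, y in ((a, b), (b, a)):
                    if x == n and y in remaining:
                        remaining.discard(y)
                        stack.append(y)
        out.append(group)
    return out

def decompose(nodes, edges, trivial=2):
    """Split along the weakest links until every group seems trivial."""
    if len(nodes) <= trivial:
        return [sorted(nodes)]
    edges = dict(edges)
    while len(components(nodes, edges)) == 1 and edges:
        weakest = min(edges, key=edges.get)  # the weakest remaining coupling
        del edges[weakest]
    groups = []
    for part in components(nodes, edges):
        sub = {e: w for e, w in edges.items() if e[0] in part and e[1] in part}
        groups.extend(decompose(part, sub, trivial))
    return groups

# Invented example: two tightly coupled clusters joined by one weak link.
coupling = {("door", "frame"): 9, ("frame", "wall"): 8,
            ("garden", "path"): 7, ("wall", "garden"): 1}
print(decompose({"door", "frame", "wall", "garden", "path"}, coupling))
```

The cut falls along the weakest link (wall–garden), not along any preconceived category, which is precisely the point: the groups emerge from the strength of change coupling alone.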

Many recurring collections of coupled elements tended to distil down to only a few happy, balanced solutions. However, these patterns were not actually solutions; they were the common properties of the organisations of elements solving those problems. The patterns were the pairing of the problems to be solved and what would be true for any good quality solution.

In a complex project, if you reduce the number of connections between elements or make the relationships visible, adjustments and estimates become easier to predict and contain. Changes in a project based on preconceived conceptual groupings cause cascading effects and induce further stresses throughout the system.

Adjusting the style of a window frame can affect whether a window can provide enough light to make it worth installing in the first place. For example, with the advent of UPVC windows, given the requirement for approximately 10cm of border around the glazing of an openable window, it is hardly worth the effort to build window spaces less than 40cm wide.

Furthermore, a style change can affect the number of bricks needed or whether a wall needs a reinforced steel lintel. Physical limits frequently dictate whether a design is valid or worthwhile. The availability of materials should too, but so often you see examples of people shipping materials halfway across the world just to have precisely what they ordered. In a sense, this precision- and modularity-based approach could be considered environmentally unfriendly.

With code, we see similar patterns of complication. You might include a global or a Singleton to allow access to a unique feature of your platform, but then a later revelation requires you to offer two or more of these (e.g. after-the-fact testing suddenly becoming necessary as part of a security or safety audit). Now, everything that accessed resources through the Singleton or global will need to find a new way to pass a context so it can act and react according to the test bench setup.
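A minimal sketch of that migration, with invented names, might look like this: the ‘before’ function is hard-wired to one global instance, while the ‘after’ function accepts a context, so a test bench can supply its own.

```python
class Telemetry:
    """A stand-in for some unique platform feature (name invented)."""
    def __init__(self):
        self.events = []
    def record(self, event):
        self.events.append(event)

# Before: a module-level singleton that every caller reaches for.
TELEMETRY = Telemetry()

def process_order_global(order):
    TELEMETRY.record(f"processed {order}")  # hard-wired to the one instance

# After: the dependency is passed in, so a second instance can exist.
def process_order(order, telemetry):
    telemetry.record(f"processed {order}")

process_order_global("B-200")  # recorded by the single global instance
bench = Telemetry()            # a second instance, e.g. for an audit test bench
process_order("A-100", bench)
print(bench.events)            # the bench saw only its own order
```

The painful part is not writing `process_order`; it is that every existing caller of `process_order_global` is coupled to the global and must change together, which is exactly the kind of hidden coupling this chapter is describing.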

Using reality to reveal coupling

Christopher Alexander’s solution to discovering how elements interact was to be at the site. He used many mock-ups with bricks placed dry, cardboard constructs and painted paper strapped to wooden frames, as well as anything else his team could do to trial an idea before it had to be, quite literally, set in concrete. This is like prototyping for software developers. Proofs of concept take us from untested hopes to feasible options or moments of clarity and despair.

Christopher Alexander used mock-ups to discover what you can’t extrapolate from paper designs, including how objects inside and out obscure or bounce light. The choice of an angle and colour combination may affect your mood but go unnoticed on a paper plan. How a view looks through a window may cause people to linger there, revealing that more passing space is required to navigate around those captivated by the vista.

We engage in a similar behaviour when doing exploratory programming, trying out ideas in the source rather than relying on paper designs or mental modelling. We make quick changes to prove that an idea might work in practice. Then we roll it all back and start properly. Using a spike like this shows us how the development will progress if we commit to it. We can identify how it might feel to build upon it later or even how it might be for the end user via a UX mock-up.

The most pertinent aspect is that the speculative activity should be done in the place itself: the actual site where the final work will eventually be committed. This means you get to see how it impacts the final form, rather than just guessing at the impact. When you work away from the main area and integrate at the last minute, you learn of the problems quite late. The value of this activity is that you are always building towards gaining hindsight; effectively, you are already wiser before you lay the first stone.

In Domain Driven Design[DDD04], Eric Evans claims that domain models undergo many iterations and name changes before finding a ‘deep model’. As you internalise the core structure of the problems you face, you see nuances in the description of the problem. Those moments of recognition are a form of acquired discernment. When you can perceive these new differences, you also recognise the need for a different reaction to the phenomena. A different interpretation of the world begets a different response to it, and before you can discern a difference, there is no way you can incorporate it in your design. We cannot rely on hindsight. We must depend on many iterations, as it is the only way to see what should have been obvious all along.

We often only see the shape of the solution as we draw very close to the end of a project. The form of the problem is similarly revealed quite late. This is not a coincidence. We would not need to iterate if we fully understood the problem we were trying to solve, or all of the problems inherent in any solution; if we did, software development would simply be a matter of data entry.

Development as a puzzle

You can compare this to puzzles. When you first encounter a puzzle, you are given it in literal form. Whether it be a word puzzle, a wooden or metal toy, a puzzle cube or a sudoku, you begin with only a simple concept of what the outcome should resemble. This is much like your customer’s goal. You have yet to learn how to get there but can visualise the final state.

The more you poke around at the puzzle, the more you learn about it. You uncover the sub-problems and generate local solutions. You’re no longer solving the original problem; you’re solving the steps towards it and generating new words for states you see and actions you can take. This is very much like programming, which is why people often say you need to plan again once you know more. They mean that you need to replan once you understand more about the problem and its sub-problems and have metrics for the value of the sub-problems’ solutions.

We work through puzzles by learning about the contexts around them and finding features of the problem we can fully understand. We can solve the complete puzzle only when we grok the smaller parts. Sometimes, we throw a lot of apparent progress away, which can be disheartening for some people; nevertheless, deleting code is not throwing away progress. Progress is accumulated in the knowledge of the people solving the problem and should therefore be measured in how fully a problem is understood. So you should regularly review the problem domain thoroughly and as early as possible.

You need to highlight all the sticking points and sub-problems and find a way to use those vantage points to see what you need to see. Sometimes, you will only know about a sub-problem once you have solved a different one. You will have to deal with many unknown unknowns in your puzzle or project, which is why collecting requirements must also be iterative. In Domain Driven Design[DDD04], Eric Evans suggests you should go back to the domain experts multiple times while developing a solution, not because they have new information for you, but rather because you can finally receive the information overlooked during previous interviews. It is only intelligible to your senses after you have built the requisite mental scaffolding to support the nuances of their statements.

This agile approach to requirements means a requirement can suddenly appear. It may have been misinterpreted, misunderstood, or ignored the first or second time. Later, through an awareness of interconnected details and the newly acquired discernment of the software developer, it can reveal itself as a critical element within the grander scheme of things. When we listen and write things down, we only write down what we can understand. We cannot capture requirements that go unmentioned because they are too obvious to the person relaying them. We are also oblivious to what we fail to write down because we take those elements for granted.

To detect these unknown unknowns, Christopher Alexander worked within the domain whenever possible. Meanwhile, agile developers try capturing them by asking additional questions about each area. They can’t look out of a cardboard window, but they can ask whether a report is essential and what the business impact would be if it were delayed or otherwise unavailable. Sometimes, a tiny thing can have a huge impact. For example, someone might say, ‘… and then we email the report to three people’. You could interpret this as them needing the ability to email a report, but when you dig further, you find out that it is a release requirement. Perhaps the report is evidence required during a safety audit and must be archived and delivered with the build, while the other two emails are for interested managers, or exist just because the person in charge of CI wanted to set up a liveness checker based on them. You must ask, “Why is that important to you?” to get these nuanced answers, which subsequently lead to entirely different requirements.

Given that revelation is part of any worthy process, it’s clear that the classic waterfall model of software development is demonstrably wrong. At least, the presentation of its process seems that way. We want to avoid that amount of up-front, context-free architecture, and integrating mock-ups and prototypes into the process will lead to some of the best and highest-impact design revelations.

However, many agile processes are unfavourable for different reasons, as they exhibit urgency problems. Urgent, essential tasks should be undertaken immediately. Important, non-urgent tasks should override urgent, unimportant tasks, but agile processes often prioritise work items the other way around. They select the most urgent item at each stage, until the neglected important work blows up in larger projects.

Some will argue this is not true, because an important task will be recognised as urgent once it is visibly not being addressed; but this implies you have mature team members who can signal the problem ahead of the upcoming critical moment and filter out tasks that are not important. Most agile practices do lead to good outcomes, but only because they reintroduce features that were removed in the unwavering pursuit of ultimate efficiency.

Christopher Alexander’s approach to urgency was different. For him, decisions had to be made, and a problem fully understood, before any irreversible actions were taken. Urgent tasks lead to discovery and help in making decisions; they are always about ensuring you make the right decision before it matters. As his process matured, the number of urgent tasks declined, as the design patterns process for architecture led to decisions being made as part of a good generative sequence, before any money could be spent on the wrong things.

The difference here is that agility, as it is practised, does not separate the design and problem-resolving stage from the building phase. To some extent, that is the nature of software development. Proofs of concept tend to become final products, and prototypes continue to gain more features until they become shipped software. As the mock-ups of software are interactive, there’s a tendency to think it’s possible to polish them up into the final product. Unlike the cardboard and paper mock-ups of Christopher Alexander, from the outside, proof-of-concept code looks the same as the final form.

If there were no way to realistically ship prototypes (if there were such a thing as cardboard code), then the processes we put in place in the name of Agile would not be as severely misused. If we could use agile development practices to become better prepared, we could slowly reveal the solution to the complex problem and only pour the concrete when we believe we are ready. I have experienced a reasonable amount of success by actively choosing the wrong language to develop code in, purely so I can be sure it remains a prototype. If you know you need performance, write it in Python. If you know it needs to run everywhere eventually, write the code in Swift, F#, or some other single- or limited-platform language. This is a strategy I have employed to avoid needing to be good at politics while still escaping the technical problems caused by trying to build a final product on the frame of a prototype.

[1] A link to one of the versions of the talk can be found here: https://www.infoq.com/presentations/Simple-Made-Easy/