Introduction

This book is for curious software developers of all types. It doesn’t matter which language you use and the industry you work in is irrelevant; what does matter is your interest in why things are the way they are. You will find much more than object-oriented design patterns here as the problem we face is far broader.

This book is for developers who have heard of, used, or even read up on design patterns but think something is missing or needs correcting. It is also for those who wish to know why this can still be the case. Furthermore, it is for those individuals who love design patterns and want to know how to extend their benefits. This book is for developers with little to no experience in design patterns who wish to avoid their pitfalls, and it can benefit anyone who has worked with patterns for a while but was surprised when their purported values did not emerge. It is for those who like the design on the cover and think it would look nice on their bookshelf.

In these pages, I hope you will find practical tips and useful takeaways, but this is not a how-to manual. Instead, think of this as a book that explains the principles you can use in the following ways:

  • To build a toolkit of techniques to dissect existing patterns
  • To learn how you can better repair broken things when you see them
  • To uncover ways to correctly identify whether they are broken in the first place

In short, if you have opinions on design patterns, this book can help you justify them. If you don’t, you might just begin to develop some. Additionally, as a bonus, you will discover how to get the most out of them, regardless of their flaws.

As the story unfolds, the foremost players will be an architect, a small group of software developers, a movement populated by hundreds of developers, and the way in which the world reinforces our actions.

The architect

Christopher Alexander was the architect. He was the central figure in the initial discovery of patterns and set the course of pattern history. Although he wasn’t a software developer by any means, he was fascinated by the possibilities presented by developing software using a pattern-language approach. His book A Pattern Language[APL77] became famous among software developers and was the basis for the formation of design patterns in software. However, his story takes its first and most consequential step with the much earlier book Notes on the Synthesis of Form[Notes64], which was written back in 1964 when he was still an academic.

The software developers

A small group of software developers became known as the Gang of Four, often abbreviated to the GoF. Together, in 1994, Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides produced an incredibly successful book called Design Patterns: Elements of Reusable Object-Oriented Software[GoF94], which detailed 23 software design patterns for object-oriented programmers to learn and use. This best-selling work is often called the ‘GoF book’, and that is how I shall refer to it throughout this work.

The movement

The software-design-patterns movement was real and still exists to this day, but it is now far more low-key. At its height, hundreds of developers contributed to it and spread the word to anyone who would listen. In addition, the movement rippled out into other areas of development, such as education and organisational change. As the hype receded, only a few remained faithful to the broader pattern movement.

The world

The last component of the story is the world itself and how natural laws of emergent behaviour clashed with many of the expectations and needs of those involved. This final element—the emergent behaviour of complex systems—brings us back to the start. Christopher Alexander’s first work sought to tame complexity in intensely interconnected projects and guide the emergent properties of larger systems to positive conclusions.

Our journey

During this voyage, I must reference patterns from software development and physical architecture. The majority of the physical architecture patterns identified by Christopher Alexander and his colleagues are from the book A Pattern Language. As they are numbered in that book, I will reference them as follows: 202 Built-in Seats. The software design patterns in the GoF book do not have numbers, so I will reference them in the following manner: Strategy. Not to spoil the surprise, but other software design patterns will also be referenced in the same way. At the end of the book, there are summaries of most of the patterns that are mentioned to help those less familiar with the subject; however, they are my summations, so you may still want to obtain a copy of each book to fully comprehend the patterns held within.

Before we begin the journey, we should take a look at the road map. In the rest of this introduction, I intend to highlight the kinds of questions that I later attempt to answer—this is so that you know where we’re going. The terrain will get a little rough at times—and there are no well-worn tracks in many places; unfortunately, this means that it will be necessary to explain some of the historical and theoretical elements of the topic along the way. I won’t try to convince you of anything nor attempt to directly answer any questions in the remainder of this introduction; I will only clarify what I intend to address by the end of the book.

Which patterns?

We should start with a simple question for software engineers who already know about design patterns: how many design patterns are there? Most would give an answer of 22 or 23, depending on whether they’re brave enough to include the Interpreter pattern. Indeed, this is how many there were in the original GoF book[GoF94]. And, if you do a web search for design patterns, some of the suggested ‘People also ask’ questions I have observed were:

  • What are the 23 design patterns?
  • What are the 3 basic categories for design patterns?

However, most software engineers (and leading search engines) get this figure wrong, as there are many more. If you’re in the camp that thought there was a higher number, you might be thinking about the Pattern-Oriented Software Architecture series of books1, the Pattern Languages of Program Design (PLoPD) series2, or other books like Game Programming Patterns[GPP11]. In fact, there are many domain-specific software design pattern books, so maybe you were thinking about something more modern, such as Node.js Design Patterns[NODEJS16], the patterns from Cloud Native Transformation[CNT19], or Machine Learning Design Patterns[MLDP20].

Nevertheless, these books only cover a few hundred examples. Unless your answer was in the thousands, you were still way off. In fact, an estimate in the thousands would already have been conservative in the year 2000; counting only the published patterns, the number had already exceeded a thousand3, as evidenced by those collected in The Pattern Almanac 2000[Almanac00], a book published to help track the recognised and widely used patterns in software development at the time.

This raises another issue. The almanac only accounted for the patterns discovered and published in and around software development. Many people forget that other areas are equally important to software developers:

  • Learning and teaching patterns.
  • Patterns of people management and organisations.
  • Patterns of bringing about change.

The belief that patterns are limited to software engineering, restricted solely to the domain of object-oriented design, and constrained to patterns of implementation, is an incredibly narrow viewpoint. It would be like claiming you know how to cook when your culinary expertise only stretches to five different ways to make eggs on toast.

So why do we collectively believe there are only 22 or 23 design patterns for software when there are actually so many others out there? Moreover, if so many patterns exist and more are being found all the time, where are they? I will answer these questions and also explain how this state of affairs was somewhat inevitable.

There was a place—the original wiki—which kept track of patterns and the conversations around them. If you have heard of this site, which is known as the Portland Pattern Repository, you may also know that it hasn’t been updated in some time. Why is this the case? There is also the Hillside Group, which maintains a website with many (often dead) links, but neither site is as famous as the GoF book[GoF94]. Why does an exhaustive pattern catalogue for everyone to use not exist?

While a definitive answer explaining why this didn’t happen in the past would be impossible to provide, I will reveal the many forces at play that continue to make it an unlikely event. You will learn how those forces affect not only established software development practices but also processes and methodologies adopted more recently. In addition, you will learn how physical architecture and building development are similarly affected.

1

The Pattern-Oriented Software Architecture (POSA) series includes Volume 1: A System of Patterns[POSA96], Volume 2: Patterns for Concurrent and Networked Objects[POSA2-00], Volume 3: Patterns for Resource Management[POSA3-04], Volume 4: A Pattern Language for Distributed Computing[POSA4-07], and Volume 5: On Patterns and Pattern Languages[POSA5-07]. The final book in this series raises questions similar to those I raise in this book.

2

The Pattern Languages of Program Design[PLoPD95] series includes five books, with a large gap between volumes 4 and 5.

3

I counted 1007, but there are quite a few, so I could be wrong.

Language

If you look at any modern and reasonably sized piece of software, you will surely come across something with the word factory in the name. You might also find a few singletons in the code. The larger the codebase, the more likely it is that you will see recognisable object-oriented design patterns. You may even note a query_decorator, a list_iterator, a status_observer, and possibly even a page_builder or two. These pattern names have become thoroughly entrenched in our software development language.

Though not as widely studied these days, design patterns are regularly used by people who have yet to see a copy of the GoF book[GoF94]. The names of these patterns are baked into tutorials, how-to guides, and example code, all of which people copy and paste into their projects.

Design patterns first appeared in Notes on the Synthesis of Form[Notes64] as a byproduct of Christopher Alexander’s efforts to find a way to reduce the complexity of large-scale building projects. We will go into the details later, but a feature he saw in them at the time was how they formed a language. This sense of them forming a language was only made clear in the books he published after his time in academia, A Pattern Language[APL77] and The Timeless Way of Building[TTWoB79]. The ubiquity of software design pattern terminology affirms this conclusion.

However, there is a downside to the language-like qualities of patterns: they become less useful in the long run. The benefits of structure and communicability have their pitfalls, as a pattern loses its essence when it is referred to only by an arbitrary handle.

As an example of the weakness of words, what does the word ‘literally’ mean to people now? In the past, it used to mean ‘real’ and ‘actual’, not the imprecise term it has now become. Additionally, what does ‘done’ mean to the people you work with on different projects? Language evolves. Sarcasm aside, words and their meanings are relative to the environment and the participants in that environment. We need to recognise that definitions are not eternal and immobile—but rather cultural and flexible.

[Figure: The GoF Visitor.]

The books published in the 1990s were full of recognised patterns—and pattern authors gave names to each of them. The names stuck around, but our understanding of what those patterns mean has drifted. When we copy patterns without completely understanding them, their meaning mutates. If we copy a visitor pattern from an implementation that walks a structure of composite objects, but we fail to see the details of the typed callbacks, the pattern’s name will now mean something else. We can be forgiven for this, as even the GoF referred to the Visitor as the ‘Walker’ before the book was published1. So, what do all those GoF names mean now?
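
To make the distinction concrete, here is a minimal sketch in Python of what the GoF form actually prescribes, contrasted with the bare traversal that often gets copied in its place. The Circle, Square, and PrintVisitor names are hypothetical, invented purely for illustration; nothing here is taken from the GoF book’s own code.

    # GoF-style Visitor: the defining detail is the pair of typed callbacks
    # reached via double dispatch, not the traversal itself.
    class Circle:
        def accept(self, visitor):
            visitor.visit_circle(self)   # the element selects the typed callback

    class Square:
        def accept(self, visitor):
            visitor.visit_square(self)

    class PrintVisitor:
        # One operation, kept outside the element classes, with a distinct
        # callback per concrete element type.
        def visit_circle(self, circle):
            print("operating on a circle")

        def visit_square(self, square):
            print("operating on a square")

    visitor = PrintVisitor()
    for shape in [Circle(), Square()]:
        shape.accept(visitor)

    # What often gets copied instead is just the walk: a loop over the
    # structure with no typed callbacks and no double dispatch.
    def walk(shapes):
        for shape in shapes:
            print(type(shape).__name__)

The second snippet still walks the structure, but once the typed callbacks and double dispatch are gone, the word ‘visitor’ in that codebase refers to something else entirely.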

[Figure: Misunderstood Visitor: A Walking Pattern.]

Beyond the individual names of patterns, nowadays we regularly use the term design pattern interchangeably with technique or UX design system. Some authors will add ‘design patterns’ to the title of a book or webpage to instil a sense of validity in the content delivered. It is intensely dissatisfying to observe how anything that signals a property will inevitably be mimicked and ruined, from black-and-yellow striped but non-threatening insects to five-star review bots. However, why is this dilution of meaning not seen as a problem by those involved in the patterns movement? Worse still, when it is recognised as a problem, why is it seen as somehow unavoidable?

Aside from co-opting the term, UX design patterns exhibit another problem. Even though some people complain that the GoF patterns are idiomatic, UX patterns suffer from the same flaw to a much greater extent. I explain in more detail later what idiomatic means for patterns, but compared to authentic pattern forms, idioms are less powerful, less adaptable, and often unfalsifiable.

Indeed, UX patterns don’t fit into the original definition by Christopher Alexander, but does this matter? Are idioms just as useful in practice? Given how the usage of the term has changed, what is the value in trying to fix anything about design patterns if they don’t exist anymore, except in the form of a book and some recurring names?

I will answer these questions and explore how to regain some of the benefits of patterns without requiring the whole system to be overhauled.

1

Naming is hard. ‘Our names for the patterns have changed a little along the way. “Wrapper” became “Decorator,” “Glue” became “Facade,” “Solitaire” became “Singleton,” and “Walker” became “Visitor.”’[GoF94] p. 353. We shall see later how Christopher Alexander approached this problem through the use of diagrams.

The GoF book

There is also the question of the content and form of the GoF book[GoF94]. First published in 1994, it’s stood the test of time for a work on computer programming. No book other than Clean Code[CC08] seems to have sold nearly as many copies while also still being regularly recommended to new generations of interns and graduates. However, there’s never been an update. Even a later book on the same subject, Head-First Design Patterns[HFDP04], has a second edition. So, the first question must be, is there anything wrong with that? Are design patterns eternal, meaning the book does not need an update?1 Given the number of patterns, this cannot be right, can it? Therefore, we’re left with the questions: what needs updating, and what should an update look like?

There is also the question of why other books on patterns are not in the top ten of best-seller lists for software design and architecture. I had to get down into the 20s before I found another title at the time of writing. In the lists I saw, which offered rankings2 in accordance with estimated sales figures, Game Programming Patterns[GPP11] was only at position 27. If there’s such a glut of patterns and books on patterns, how come they sell in significantly different quantities? Did the GoF book’s success drown out other design pattern sources?

Some people have asked whether the GoF patterns are design patterns at all. In fact, more experienced people than me have called them out as idioms—while some individuals have stated the GoF’s patterns are not found in actual use anywhere3. Others claim their use only began because of the paper and the book—turning them into a kind of self-fulfilling prophecy. Why do people think the book’s patterns aren’t actually patterns? If this is true, what are they then? And what would an actual pattern look like?

Finally, there’s the question of form. There are some concerns regarding the way the patterns are structured in the book; some of them did not make complete or obvious sense when contorted into this mould, making them less practical than they could have been. The work also observes that there are significant differences4 from Christopher Alexander’s patterns and does not strictly adhere to the principles of his pattern language. This raises the following questions: why do some people think the movement could have been hurt by setting a theme and structure for patterns? What harm did this literary structure do to the power of patterns as both a device and a movement?

1

Even the original GoF believe some patterns need updating and others should be added, so realistically, the answer is a simple no.

2

One site was https://bookauthority.org/books/best-selling-programming-books; the other was digging into Amazon sales rankings myself. The rankings change regularly. Sometimes, there are no other design patterns books in the top 20. Other times, it seems Patterns of Enterprise Application Architecture[PoEAA03] can make an appearance.

3

This particular point was written up by David Budgen in a study at Durham University https://www.infoq.com/articles/design-patterns-magic-or-myth/

4

Page 356 mentions Alexander’s patterns have an order, emphasise the problems rather than solutions, and have a generative capacity. On the following page, they admit the work is not a pattern language but ‘just a collection of related patterns’.

Systems theory

Central to the history of design patterns was a drive to tame the complexity inherent in architecture, both in terms of physical construction and software. Both forms of development deal with sequences of constructive actions with continuously interacting elements. Interactions between elements, which in turn create emergent behaviour, are what systems theory is all about. So, given how linked together they are, why do we fail to teach systems theory alongside design patterns?

We can use systems theory to analyse questions regarding the larger systems at play in the design pattern space, not only in terms of the patterns themselves but also the development of the theory of patterns. Moreover, we can examine the systemic problems caused by the current processes for finding, publishing, and eventually using patterns. There were certain forces at play that drove the adoption of patterns, yet others have diverted them toward a different form. The pattern movement changed over time, and systems theory helps to explain and predict some of the otherwise unexpected outcomes.

In later works by Christopher Alexander1, it becomes apparent that systems of feedback between larger social entities—such as the pattern movement, the software development industry, and the construction industry—led to some far-reaching consequences. These outcomes range from the mundane, including mass selling-out, to the unnerving reality of corruption, bribery, and death threats. Systems theory will help us to frame these events so that we can see them coming before they arrive and avoid similar problems in the future.

1

Many of Alexander’s later works touch on systems theory without being explicit. The Nature of Order[NoO1-01] and The Battle for the Life and Beauty of the Earth[Battle12] certainly do, but so, to a degree, does The Production of Houses[TPoH85].

Unfulfilled potential

Design patterns seemed like a fantastic thing when they first became popular. Their promise of improved code reusability was—and still is—vital to many people. However, we need to know why that potential was not realised. Given how many patterns there are, with all their combined, compiled wisdom, why were they so ineffective as a form of knowledge transfer? This point is not debatable; as they stand, they are ineffective. The lacklustre level of adoption of any but the GoF patterns in mainstream development practices is a clearer testament to the inefficacy of the movement than any statement I could make.

Large-scale, complex software requires considerable effort to get right. The work of Christopher Alexander was all about managing the complexity of a different domain. What happened to that aspect of design patterns during the concept’s infiltration into software design patterns? Why haven’t they helped to tame the complexity? Was the initial response of the software industry neglectful of his work? What can we do to put ourselves back on track? Do we need to go back further and start again? In other words, should we walk the same path as he did to derive our own truer, more fitting form of patterns for software development?

Tracing the history of how the design pattern movement affected buildings and architecture reveals another parallel with software development. When and where the pattern processes were allowed to run their course, you can see how they created significantly better final results than the alternative—the default approach of contracts and paper design. However, the determination of what is better is a value judgement from a particular perspective. According to systems theory, the world stood in the way of what was locally good because the system above viewed Christopher Alexander’s work through a different lens—a lens of power.

That skewed interpretation remains as much alive in software-design-pattern-related activities as it does with regard to the physical building space. Inevitably, this leads to similar obstructions and corruptions of the process. Is something actively stopping design patterns from achieving their potential? If so, can we do anything about it?

Where next?

I hope I have successfully whetted your appetite with these questions or, at the very least, helped you decide whether this book is for you. Either way, I have outlined the premise to be explored in these pages. My research is incomplete, and I wonder if it will ever end, as this topic is significantly broader than I initially believed. I continue to dig further as I write this first edition and will likely produce a follow-up edition a decade or so from now. If not, I hope this book will explain why I, like the GoF, never got around to it.

What I have learnt has changed my mind multiple times. I used to be a staunch design pattern sceptic. I believed patterns to be useless cargo-cult content. However, I now find them fascinating and useful but ultimately unfinished. There is much work to be done to get them to where they could be, so I lay out the measures that need to be taken. This is not a call to arms but rather a projection of what the future may hold if I, or others, engage in the necessary work. There is promise there, but no promises, I’m afraid.

Now, we must start at the beginning. A full description of the origin of the form is absent from so much of the literature on design patterns. However, as is so often the case, history holds the key to understanding. It will help unlock their potential and reveal the reasons why things went wrong for design patterns in the way they did.

The Link to Agile

To fully understand design patterns, it’s useful to trace how they evolved through the different domains of their existence and how each domain affected and was affected by design patterns. However, you’re not here for a history lesson. You’re a software developer who wants to know the relevant details with some fundamentals to back up the claims. This is why we will take a shortcut to the point where the most striking similarities between the two design patterns movements—in physical architecture and software development—seem to have materialised.

Agile, as it is understood these days, is a non-process where software is developed and deployed according to the Agile Manifesto1[AM01]. The core values can be stated as follows: preferring to take responsibility; delivering useful things directly to the end user; and actively seeking out and adjusting the processes based on new information. These ideals of product design and delivery are evident to us now. However, they were also obvious a couple of hundred years ago—so something clearly went wrong along the way.

At the same time as the design patterns movement in software, other changes were brewing. The concepts at the core of Scrum can be traced back through published works on the Portland Pattern Repository or found in Pattern Languages of Program Design[PLoPD95] (from now on referred to as PLoPD) under the heading ‘Episodes’. Yet they go back even further than that. Most of the salient features of Scrum’s developer empowerment appear in Episodes, and many of the features of the Agile Manifesto can be traced back to elements that were present in those and even earlier works.

It’s not that the Agile principles came from design patterns, nor did design patterns come from Agile, but they both appear to stem from the mood of the times. Both emerged from similar feelings that there had to be a better way to develop software. A backlash formed around the notion that we could increase quality by moving decision-making closer to the place where the effects of those decisions could be observed.

Christopher Alexander had shown the value of looking to the recipient and user of the product as a guiding force for decision-making; in some cases, even a worker on the building site would be deeply involved. Agile development is an attempt to bring the customer closer, even going as far as to suggest including them in live tests during development. Alexander’s work on the processes of using patterns to help guide production mirrored many aspects of the worker empowerment found in Episodes and Scrum.

The Agile Manifesto, extreme programming2, design patterns, Scrum3, and even to some extent, the notion of User Experience4, all appear to originate from the same period, when people were bouncing related ideas around and communicating with fresh and wide-open eyes at a time of client-centred thought. Although these ideas did not come directly from Christopher Alexander, and many had their origins before this time5, there was nevertheless a sharp change in their uptake around this period. This was the 1990s. Back then, adults were witnessing the growth of computing across the world and the rise of the internet. It was the early days of success in technology outside of office usage. These were the days of Nintendo and Sony vying for the top spot in video game consoles and, at times, innovation was valued over consistency. It could simply be that the information was more readily available. Personally, I think it’s an interesting connection, even if it’s just a coincidence.

1

The manifesto can be found at https://agilemanifesto.org/ and the principles can be found adjacent to it at https://agilemanifesto.org/principles.html. There are a lot of resources available describing Agile, with many mistaking Scrum or SAFe or some other methodology for Agile itself.

2

Kent Beck developed the approach during his time on the Chrysler Comprehensive Compensation System project. Some accounts put the timeline around 1996, but the book Extreme Programming Explained[EPE00] was released in 2000 and later updated in 2005[KB05].

3

Ken Schwaber and Jeff Sutherland introduced the main thrust of the methodology at an OOPSLA conference in 1995, but had been using it and refining it for many years before.

4

Usability Engineering[Use93] by Jakob Nielsen brought us many techniques to analyse UX, but it was not generally recognised until much later when Donald Norman popularised it under the name “user experience” as part of his role at Apple.

5

For example, the usage of the term scrum and some of its constituent activities dates back to a 1986 article titled The New New Product Development Game by the authors of The Knowledge-Creating Company[TKCC95].

Complexity

For anyone reading the GoF book[GoF94], the abridged history and reference to the originator of the term ‘design pattern’ might be misleading. The information on Christopher Alexander can be found early on in the book but is quite limited in scope. When I first read this work, I imagined that he was some revered figure from an era before my grandparents were born, but the real story is much stranger and more intriguing.

Christopher Alexander was not some historical figure who lived during the height of the old British Empire. Instead, he was a modern architect working at the time of the rise of the software-design-patterns movement. However, he would have almost certainly refuted the claim that he was a modern architect, as there are many preconceptions associated with the term.

Some individuals, including myself, suggest he was a post-modernist because he understood modern architecture more deeply than most modern architects ever did. This deep understanding led to his disillusionment with the architectural institution as a whole[Grabow83], and simultaneously led to him being praised and revered for a generation as a ground-breaking architect with a vision for a new era of architectural methodologies.

When most people think of modern architecture, they think of glass and steel buildings rising in the sky. They picture cubes and off-angle constructions. They conjure images of brutalist buildings built with cutting-edge mechanical engineering techniques. All these things are modern architecture, but they are not modern methodologies. This realisation by Christopher Alexander precipitated his journey into patterns and beyond. He saw the modern way as stagnant and realised it was stymied by complexity.

[Figure: Guggenheim Museum Bilbao: modern construction, conventional planning.]

The mainstream understanding of building architecture was—and still is—that an architect must first produce a design, and only then can it be built. The more elaborate and complicated the design, the better the architect must be in order for the project to succeed. If a large design were commissioned, the project would need an architect who could handle the necessary thinking to produce it. They would need to spend much time in deep thought, imagining a new building from nothing and then drawing out the design. A first draft is always assumed to be flawed, so the process was geared around that truism and allowed for further iterations. Others would review the draft and help the architect revise the plan where necessary, but it was always a singular vision.

As any project grows, the number of interconnected pieces multiplies. A single human mind cannot consider all of the effects of changing one or two small parts. Because of this, modern architecture often relies on modular parts or last-minute additions to fix impossible stresses. Modern modular construction moves the decision-making and problem-solving processes away from the construction site and back to a paper plan, trivialising the problem into an abstract form. Instead of solving a wiring or glazing issue for a specific house, modern methods use modular pieces to resolve the more general puzzle of glazing or wiring for the average house. There is a problem with this, though: I don’t know anyone who is precisely average.

Such forced inattention to local details and deviations inspired Christopher Alexander to write his original work, Notes on the Synthesis of Form[Notes64]. In it, he describes a way to attend to these common problems of complexity using a new way of working he developed to resolve the inevitable complications of large modern projects. It’s not overstating things to say that Christopher Alexander was the first person in centuries to develop a new approach to architecture.

From the start, Alexander’s work revolved around resolving the problem of dealing with complexity in our world. We had passed the point at which traditional architectural methods—which happily adapted to changing needs at a local scale—could keep up with our new technologies, so we could not go back. Furthermore, because modern architectural methods—which could support ever-changing technologies—could not address the critical problems of complex environmental constraints, we needed something that surpassed them too. Notes on the Synthesis of Form promised to provide a process to reconcile this, but it was only the first step in a grander sequence of discoveries.

Christopher Alexander refined his methods over the course of decades. During the period of the software design-patterns movement, his work on architectural patterns had already stabilised. However, he was then in a phase where he was working to refine even more fundamental properties of forms and processes of change. We rarely see references to this later phase within software development literature, but its absence may provide some insights into why the patterns movement ended up the way it has.

The way Christopher Alexander worked

Christopher Alexander’s processes differed from those of the prevailing builders and architects at the time[TOE75]. Indeed, many aspects of his work were similar to the principles espoused in the Agile Manifesto[AM01]. They were iterative approaches, which demanded taking small steps and reviewing regularly. They employed mock-ups rather than extensive up-front planning and documentation, used prototypes to prove an idea and elicit otherwise missing requirements, and required working with the client directly[NoO3-05].

However, his team also used other processes that were less in keeping with what we consider agile software development to be. They would use numerical methods1 to solve things, performing experiments through simulation. They would use new materials in novel ways2, offering some of the benefits of traditional materials but without their limitations. Additionally, they worked to a limited budget and kept track of what was possible3, rather than revising costs and requesting additional funding. In a critical departure from agile software development principles, they were unable to provide a constantly deliverable product as they did not have multiple working versions from which a preferred solution could be selected. Half done for them was not done at all.

Christopher Alexander built processes that facilitated faster feedback and provided the freedom to do the best thing to get that feedback while balancing it against a budget. Remember, this was the 1970s, long before DevOps4 turned up. Traditional builders had limited time and money for construction, and the same is true of any modern architectural project. In that respect, his process was no different. The difference was his commitment to producing something viable within that budget rather than pushing to have the budget increased. When the team needed to add something, they took something else away—even when this trade was thrust upon them[Battle12].

Christopher Alexander’s developments had the capacity for adjustment built in; nothing relied on everything else being just so before it was usable. However, this called for a different way of working; elements needed to be adaptable. This was why he sought out new materials. For him, it was essential to have the opportunity to make adjustments on site in response to last-minute revelations5. These revelations were the unpredictable issues brought to his attention by the process of construction, not just during, but by the feedback from the act of constructing—things no-one could have anticipated before beginning the work.

Some agile development methodologies suggest a similar approach. Deploying working code to clients early to get feedback is similar to constructing within a budget so that the client gets something, even if everything else goes wrong. Working to an overall budget and realigning as the project progresses draws parallels with the principles of working with adaptable materials and building flexibility into the process. Prototypes and spikes6 are manifestations of the cardboard cut-out approach to getting on-site feedback. Simulations of architecture mirror the making of walking skeletons in code, as both of them prove that the overall structure works as expected.

Innovation

Christopher Alexander and his team were known for inventing new processes to get things done that were realised just in time to meet their needs7. Unlike other architects, he was hands-on during his building projects, often finding new ways to achieve his goals while keeping costs low. The whole team worked like this, constantly taking into consideration all of the available information about the construction site. The availability of labour, tools, and materials guided them to discover new processes within easy reach, often undocumented or unique, but always appropriate.

Many examples exist throughout his printed works illustrating the way in which Alexander’s team developed solutions for specific contexts. In The Nature of Order, Book 3[NoO3-05], there is a collection under the title The search for new materials (starting on page 518) in which he describes many such instances as well as some of his reservations. One complaint relates to renewable materials having lost some of their value as they have been made less adaptable to their environment by the processes of others. Later, he gives credit to the developers of aerated concrete blocks8, which might seem surprising. But for him, the key was that they were adjustable and conformed to the builder’s needs as well as wood or clay might.

Alexander paid attention to the environment right from his first project9. In 1961, while living in Gujarat, he developed a way to build a roof for a new school. Local resources were scarce. There was no wood to speak of and almost no access to transport for bringing materials in; however, pot-shaped guna tiles were readily available. His approach involved using stacks of these tiles, which made it possible to place arches in parallel lines and produce a dome. Supporting material is not free, and the self-supporting nature of the arches overcame that requirement. The outward thrust from the arches would have been a problem, but he found a solution in the nearby cotton fields; the dome was tied using the plentiful supply of tensile steel straps, which were originally intended for tying cotton bales. The whole process was suited to the materials that were available.

[Figure: Guna tile dome. —Christopher Alexander, The Center for Environmental Structure, The Nature of Order, Book 3[NoO3-05], p. 527.]

When I researched his history, I was shocked to learn that Christopher Alexander used all the available materials rather than just traditional ones. I had assumed he would have used more well-proven materials as they had easier-to-understand properties. On the contrary, he developed innovative ways of working with new materials as a regular part of his processes. This use of modern materials in novel ways meant that he could hardly be called a traditionalist.

He developed many new ways to work with concrete, challenging brutalist architecture’s dull and repetitive results[NoO3-05]. He worked with high-pressure water jets[NoO3-05], which were typically used to cut steel, to cut stone for experiments relating to decorative layout. And then there were the revelations about traditional materials, such as the way in which stout wooden beams, though costly up front, would be more environmentally friendly and cost-effective than the smaller stud forms simply due to the many hundreds of years for which the buildings might stand[NoO3-05].

Locally, but remote

Working at the site was a common requirement for Christopher Alexander, but one project10 proved he could even overcome that constraint when necessary. Although his team was not asked to complete the project, he still solved the problem of working remotely. The contract conditions included a requirement to lay all 8,000 m² of flooring within a two-month window during the construction of a new concert hall in Athens.

Alexander needed access to a view of the floor and the actual materials, not merely an image of them, so his team rented a warehouse sufficiently large to house sections of the final product. They cut tiles and placed them on fibreglass mats. When the design finally looked right, viewed as they would appear in their ultimate resting place, they glued them down. The completed mats were also cuttable, meaning they could adapt to any deviations and match unexpected errors in the borders when they arrived in Athens. The process proved that as long as you understand your goals (to view the work as close to reality as possible), a solution, along with a new tool or technique, should present itself.

Developers who stick to a plan and use existing tools, without inventing anything new to solve their problems, are probably not following an agile development process. A software developer should think about the product, how to build it, the tools they use, and how to improve those tools. Uninventive developers are lucky to be able to do a job that does not present novel requirements. But then again, perhaps they’re not so lucky after all.

1

Examples can be found in [NoO3-05] where they use finite element analysis for wooden structures, and [NoO4-04] for a concrete bridge structure. Other examples exist such as the reinforced concrete trusses for the Julian Street Inn.

2

Gunite, or sprayed concrete, is a quick-drying form of concrete shot under pressure at a target surface. It’s normally used to coat the surface of cavities or reinforce an otherwise looser form to give it rigidity. Examples include the hull of swimming pools or creating a skin for a cliff edge to reduce erosion.

3

There is an extensive breakdown of how they developed a budget system in [TOE75]. The aspect of budget is also covered in detail for the West Dean project in [NoO3-05] from page 238 onward, and for the Eishin Campus project in [Battle12], where there is evidence of how the established methods naturally tend towards budget extension and waste (e.g. the concrete lake slab on p. 357).

4

DevOps is a relatively young branch of a manufacturing and product development process of ongoing improvement and introspection, but tuned for software development.

5

Mentioned in [APL77], [NoO3-05], and less directly in other works.

6

A spike is what older people like me call a proof-of-concept.

7

See [TPoH85] for examples of new roof and brick design to work well with what was available and the unique requirements of the project. Many other examples can be found in [NoO3-05].

8

The blocks he refers to are a brand called Hebel. They provide the blocks and the necessary tools to work with the novel material. They can be cut with wood tools and readily adapt to any problem.

9

The development of the school roof is documented in The Nature of Order, Book 3[NoO3-05] on pages 526-527.

10

The project is documented in The Nature of Order, Book 3[NoO3-05] on pages 562-572.

Moving away from documentation and requirements gathering

No plan survives first contact with the enemy.

— Helmuth von Moltke the Elder (paraphrased)

It could have been the motto of the Agile Manifesto[AM01]. It could equally have been said of Christopher Alexander’s work after Notes on the Synthesis of Form[Notes64]. Much stems from developers repeatedly seeing their plans fall foul of poorly estimated costs. Overly complex systems often grew out of simple-seeming documents. Furthermore, customers were regularly unsatisfied with the results, even when presented with precisely what they had asked for. Using their antiquated design processes, modern architects also encountered the same obstacles.

The Agile Manifesto has an explicit preference for working software over comprehensive documentation. The value in not producing a lot of documentation up-front is much the same as Christopher Alexander’s methods, which required on-site presence and situational reviews. Plans have value, but they cannot be falsified by paper reviews. Reviews must happen at the place1 itself. Indeed, the capacity to reject designs in the early stages is the reason why mock-ups were so crucial to Alexander’s process. The preference for working software is a preference for something that can be inspected, understood, and reviewed in situ. Immediate, visceral feedback is much more potent than documentation when determining the next step.

The manifesto was also making a statement by asserting that the documents typically produced during development had no inherent value. Only documentation for an extant product was valuable to the end user.

A further problem with the documents typical of the time was their denial of the mastery, purpose, and autonomy of the programmer. Plans were orders—something to follow. The only choice was whether to do as instructed or remove yourself from the project. This echoed Alexander’s thoughts on master plans for site development.

[T]he existence of a master plan alienates the users … After all, the very existence of a master plan means, by definition, that the members of the community can have little impact on the future shape of their community, because most of the important decisions have already been made. In a sense, under a master plan people are living with a frozen future, able to affect only relatively trivial details. When people lose the sense of responsibility for the environment they live in, and realise that they are merely cogs in someone else’s machine, how can they feel any sense of identification with the community, or any sense of purpose there?

— Christopher Alexander, The Oregon Experiment[TOE75], p. 23-24.

However, as is often the case with an immature collective, things went too far. Documentation is for more than just the end user. It provides fertile ground for insights and elicits unexpected requirements. It offers a way to document how you arrived at your decisions and what informed them. We must also recognise that some specific forms of documentation are mandatory. Indeed, some paperwork is used to verify that we have achieved our expected outcomes and reached an arbitrary payment gate, while other documents may outline contractual obligations regarding security or safety.

People overlook the powerful effect of writing on understanding a problem. Writing it out often helps you to notice gaps in your knowledge or reveals contradictory beliefs. I studied design patterns to write this book, but the writing itself has also been an educational process.

These days, end-user documentation—the only documentation implicitly allowed—is considered an indicator of poor design, as the UX design should make the application learnable without it. User manuals hold marginal value for the developer; a library should be well commented and easy to grasp, avoiding the need to refer to separate documentation.

The Agile Manifesto’s signatories showed no fondness for the documentation typically produced as a byproduct of the development process. Presumably, this was because it was not a product in itself. They appeared to support eradicating documents filled with gathered requirements and technical designs produced solely to be followed. There is sense to this. Planning the whole development up-front is a bad idea, but only because it’s impossible to foresee the future. Consequently, the plan will be flawed; preparing everything at the beginning is only bad if you force yourself to strictly follow the obviously and inevitably wrong plan.

So, why plan at all? Well, because planning is faster than simply doing. Planning what you’ll cook for dinner for a week can simplify the shopping and the cooking. Sure, things can change, but at least you have an overall idea of what you have to work with. When you have an idea of what to work with, you can balance the overall effort and cost of the operation. And this is what Christopher Alexander did. His process included a lot of up-front planning. They budgeted for parts and selected the patterns to use in the overall construction. His work in Notes on the Synthesis of Form[Notes64] is all about extensive up-front plans, but they are plans for helping reduce mistakes, not plans to be followed blindly. This is how we arrive at the thoughts behind the often-quoted “Plans are worthless, but planning is essential.” The point of a plan is to limit the required effort to the minimum, not to stop thinking ahead entirely.

A preference for working software over comprehensive documentation has been interpreted as advocating for the removal of all requirements-gathering steps. However, this leads to software development without an initial phase to gather tasks, to figure out the complications, and to reduce risks by ensuring bases are covered. Does up-front requirements gathering decrease risk in practice, though?

Preparation versus risk relates to the theory of quantity over quality: how practice, deliberate or otherwise, makes you better at an activity. Practice allows for mastery and gives you new perspectives for making better decisions. It also explains how evolution wins every game ever played. Producing an order of magnitude more software products to show to the customer for feedback, rather than spending days, weeks, or months studying the customer’s requirements, sounds very similar to the tale of clay pots found in the book Art and Fear[ArtFear94]. People often repeat it as an example of how quantity beats quality with regard to the ability to create the best possible product in the end.

The aforementioned tale goes like this: the teacher announced the class would be split into two groups. They would grade the quantity group solely on the weight of their work. However, they would use the traditional grading process for the quality group by basing it on the quality of a single pot. The experiment had the most curious result. The highest quality works were all produced by the quantity group. The moral of the tale is this: to deliver the best possible output, practical experience and many iterations trump time spent in preparation and deep study.

I would like now to relay a personal story in keeping with this tale. In college, I studied Music Technology, a course that explored the technological foundations of music and other media in the modern age. It included many aspects of music, from royalties and copyright law to physically constructing a studio. Other workshops were more musical, and among them was a series of units on the composition and production of musical tracks. Music production was the reason I had taken the course in the first place and was the joining together of many disciplines. I wanted to make music using better tools and to learn better composition techniques and songwriting skills.

My music-making capabilities gradually improved over the time I spent on these units. I took each track one at a time, possibly releasing a single worthwhile piece of music each term. That’s three months per piece. I saw how I was improving with each new track and was glad I was progressing. However, we had to decide what our final project for the course would be. I am a game developer at heart and have always wanted to make them, so I committed to a project consisting of writing all the music for a fictional computer game.

This project was a tremendous change of pace for me as, without fully realising it, I had signed up to produce more tracks for this single project than I had produced in the whole course up to that point. My portfolio of songs was somewhere in the region of ten to thirteen at the time; I cannot remember precisely. Nevertheless, I decided to produce a track for each level of the fictional game, with themed music for each act and a general motif for the whole suite. I also followed some assumed guidelines for game music: no choruses or refrains, just a steady mood, and reduced dynamics, meaning a player could set a volume level once and never have to listen to silence for very long or worry about the music being louder than the sound effects they were listening out for.

With these limitations in place and the sudden need for more than twenty tracks, which all had to be composed before the project’s due date, I started work. As I worked, I decided which instruments were core to the pieces and which were track-specific. I built up a set of practices for constructing tracks and learned how the filters and effects I used would sound even before I applied them. Overall, the time it took to create each song grew shorter as the project advanced to completion. When it came to the final piece, it took me no more than an hour from first note to final mix.

This is the part where my tale reflects the story about the clay pots. Whereas in the ceramics workshop, the teacher graded the students by weight, I was due to be graded on production quality and my understanding and implementation of the techniques we had been taught in class. I admit that I had been a very poor composer when I started the course; over time, I had improved to the point where I was merely third-rate. However, after the game-music project, the teacher who graded me said the last few tracks were the best he had heard me create. In effect, the more pieces I had completed and the more in the zone I had become, the better the individual compositions were.

More consequential for me was how, after the project, my new production quality stuck. Today, I am a wretched composer due to a lack of practice, but the sudden improvement at the time meant any music I produced after the project was elevated to a new level. For this reason, I would argue that the story of clay pots is incorrect, to some extent. Some people interpret the result of the experiment as being related to agile methodologies, such as Scrum, when they’re not.

A Scrum-driven project does not aim to build many individual products, producing one great product in passing, almost by accident. It’s still a process aiming to produce one final viable product. It’s an iterative development process. Therefore, it sits in the camp of the quality group—the group graded on the quality of a single pot. Ultimately, my evaluation was based on the quality of my final suite of music, not on how many tracks I had produced. So, what we should take away from the clay-pots tale is that it’s not the pot that gets better, but the potter.

This is an important distinction because you only get one chance to make the final product in some projects. Perhaps it’s difficult or impossible to build a larger final product from a smaller one. Iterative development might not be an option. In such situations, if you don’t look forward towards the final product and what it should be, it can lead to nothing at all. When engaging in these projects, there are often insurmountable problems that are created in ignorance during early development. For instance, consider the cost and complexity of fixing security issues after your software’s first alpha or beta release. So what the clay-pots experiment tells us is that we can be better during these projects if we understand that practice and preparation are two things which can sometimes be one thing.

In conclusion, agile development can be forward-thinking and include up-front investments, just as Christopher Alexander’s team selected design patterns to structure their projects. It can be about learning deeply enough to remove the need to look forward. Agile principles will prioritise faster development to facilitate swifter learning as well as building up good tools to make future work more manageable, just as Alexander developed new tools and materials to complete his constructions. You shouldn’t expect to fashion a great composition on your first attempt if you don’t study, but if you intersperse your studies with a hundred creative acts, your last one will be better than if you had spent the whole time with your nose in a book.

Agile approaches are suitable for training your team to get things right the first time every time you task them with a familiar project. However, this form of development can be wasteful when attempting exotic projects. This is why we need to use models and prototypes, as there will be major mistakes and lots of technical debt. Christopher Alexander strove to use flexible materials to avoid the costs of these unknowns—materials where errors could be undone or avoided, even as they emerged.

Every process depends on the wisdom of the team to instinctively know the right thing to do. Agile methods allow them to do that in the same way as the distribution of knowledge via a pattern language. It also allows them to become better at their craft through accelerated experience. If what you are building can be built up iteratively, then all the better, and in software, it usually can. Do not fear throwing away the bad work and early attempts. In fact, you should be fearful of not throwing code away.

Rebuilding whole modules from scratch will become quicker as the team grows better at making them through practice. What you cannot do safely is rewrite a module you didn’t write yourself. Also, you should rewrite early rather than late, before the wisdom you gained in writing it begins to fade.

This kind of evidence might not be enough for you. You want to know why. Why is planning up-front not as effective?

1

Much like a gemba walk is about literally walking in the place where the work happens, a review of a plan must touch the reality of the problem space.

The right wrong thing

User stories should tell you the right thing to make, while requirements analysis should tell you how to make the thing right. However, the latter process gets some things right and other things wrong. It can reveal literal and obvious things, such as which platform your software needs to run on, how much memory that platform has, what counts as an acceptable response time, or how many concurrent users you expect to serve. These numbers can, of course, be wrong. An agile development practice almost expects such answers to change, yet these facts are at least knowable at the time: they can be detected, measured, and calculated, even if they turn out to be wrong by the end of the project.

Other things are entirely unknowable, such as unexpected market shifts or secret projects that are suddenly and publicly announced. No requirements analysis can gather the unforeseeable. Agile methods can prepare you to handle these events, but they can’t predict them any more accurately than traditional requirements-gathering exercises. And if such events never happen, why do we still think that agile approaches are superior?

What’s potentially knowable but often missed are the requirements we unearth when project components come online; the results of interactions between parts previously developed in isolation. These are the emergent properties and complexities. A requirements-gathering phase can occasionally pick up on a few of these if seasoned developers are present, but even if you have experienced experts onboard, you should not hope to discover more than a slim majority of them during any planning phase. For this reason, projects should have a pre-production phase. A more fruitful requirements-gathering process can occur during a prototyping or proof-of-concept development stage.

Turning to the making phase sooner has two benefits. First, it creates people with wisdom. In the future, during preliminary requirements gathering on a different project, they will already know which requirements tend to accompany the things they typically work on.

Second, this change allows you to rapidly reach the stage in development where the requirements you missed during requirements-gathering make themselves known. They appear as integration blockers or arise when completing detailed design work, occasionally manifesting as insurmountable bugs at the lower levels.

Code enforces its requirements better than documentation because conflicts in documentation need to be actively sought out. In code, conflicts rapidly become compilation or implementation impediments. There’s nothing quite as immediate and obvious as being in the middle of developing a feature and realising it’s not possible to implement it with the current API or data layout.

Christopher Alexander’s process mirrors both these benefits. Planning at the site resembles eliciting requirements by working on prototypes, while continuous integration parallels the production and reviewing of building mock-ups. The builders using Alexander’s processes were given fast feedback on what didn’t work, and became better architects at a faster rate because of it. Again, this leads to wiser builders and the swifter discovery of and recovery from emergent problems.

This process scales with experience. As the critical elements of the project increase in size, the depth of knowledge and the breadth of expertise needed by the development team grow in proportion. For large projects, the developers need to be experienced at building at least medium-sized projects. The larger the project, the more the work must be repetitive for the whole project to be completed successfully. This repetition is not like a factory line but more akin to learning how to mortice a door or frame a window; repetition with variation leads to mastery. Working as part of a team and adapting what you have learned to support the project provides purpose. Meanwhile, understanding and improving the process at the site as a trusted team member gives a sense of autonomy.

Learning is essential, and this is why teams need to stick together. It’s also why we need teams in the first place. No one can do all jobs well. Some members have to carry the knowledge for specialist areas to save time for the group. If everyone has to learn every part of a production chain, then no one knows anything intimately enough to make insightful improvements.

No process is magic; not even Christopher Alexander could invent a system by which a complete novice could build a town from nothing. But he did produce a process by which a beginner could decide to forge a village, and from that process would emerge a community of buildings and an architect.

Furthermore, writing code has other benefits. Counterintuitively, code is simpler to change than specification. It’s not easier to alter, but it is more straightforward to be sure your work is done. Code takes time to perfect, incorporating many keystrokes and moments of grumbling over failed compilations and red test runs. But even though it takes longer to get right, it is simpler to change because when every test is green, you know you have finished. It gives you immediate feedback on repercussions and adds value as part of the change. It’s also in revision control, so there’s a record of the change, which helps determine whether any newly introduced usage patterns were part of a vital feature or an unexpected behaviour added by accident.

Many of these aspects are not mirrored in the physical building site, but moving mock-ups around lets you see the impact of alternatives quickly and supplies immediate feedback on the repercussions. Feedback comes much faster than when drawing with a pencil on paper. Unfortunately, these physical processes of making changes do not commonly produce historical records, so decisions and reasoning of physical construction can easily be mislaid.

Problems with acting early

The problems with the action-first approach are mirrored in physical development too. When requirements change on a grand scale, such as building regulations for physical construction or hardware availability for software construction, much of the completed work is wasted and more must be done to move the project back to a new starting point. The source of requirements can vanish, such as when a construction no longer needs parking or a piece of software no longer needs a feature because the operating system now takes care of that aspect. Of course, there are also new requirements, conspicuous in hindsight, leading to regrets that there should have been a little more forethought. Then there are the horrible kinds of change whereby the requirements were gathered, but misinterpreted. The literal requirements remain unchanged, but the solution must accommodate the new interpretation.

An up-front analysis could have revealed some of these potential sources of outside interference, but performing it later leaves these risks unexplored until you can better discern their importance. Consequently, neglecting any up-front requirements analysis is a foolish way to develop anything, yet doing everything at the start is equally daft. The point must always be to reconsider your motivations. What is the state of the world, and what do you currently know about the problem? Each step is a place to stop and assess where you are now and where you need to go next.

Sequences, sprints, and smaller steps

Processes inspired by Agile principles are better than the methods we were using before, but only in the same sense that walking through puddles is better when wearing waterproof boots. The question remains: why are we walking through puddles at all?

Before Agile principles, the favoured production practice was to have big plans and continually refine them until they were ready for use in product development. Lots of documentation would be in constant flux, and no one could read it fast enough to stay up to date. Some chose to ignore the rules and work how they wished. They developed first and documented second, if at all.

Agile development arrived as a solution emphasising direction over destination. We needed this because we see better by noting connections than by seeking out the endpoint. You often know the right direction before you know precisely where you’re going, just as you seek the door and not the room. Moreover, you predict the best course of action before seeing the result, and you should know the need before the implementation.

Christopher Alexander’s building processes included this direction-over-destination principle. His developments were carried out by applying local, stage-appropriate changes. His team regularly held reviews with the client before committing to anything they would pay for. These reviews included everything from the overall design to the colour of the details on the tiles. The significant part here is not the customer or the feedback but the regularity of the review. Rather than following instructions, they were building according to a plan by adjusting its details in response to the data resulting from the ongoing process.

In Scrum, the iteration cycle invites people to set time limits or reduce the scope of their tasks. This ensures that everyone takes the time to look up and see where they are. Scrum requires the team to meet with those who can give feedback on their direction. Then, they can appraise the situation and decide if they need to course correct. They can also inspect long-running tasks and check whether they need a change of plan. Do they need to change? Should they stop? Or can more resources be assigned so that they can be completed quicker? In effect, at any review meeting, the team receives feedback on the observable status of the project and can decide how to prioritise the remaining work, whether this leads to a change or carrying on as before. Once these decisions have been made, they decide on the next meetup date.

As children, many of us played a game of find-the-object, in which we were told whether we were getting hotter or colder. The game was simple. If the hider saw us getting closer to the target, they would say the word ‘hotter’, and if we moved further away, they would call out ‘colder’. This is feedback. Satisfying the customer through regular delivery of working software is the first principle of the Agile Manifesto. Satisfying means more than delivering; it requires we receive feedback on what impact it had. Some organisations need to remember this critical step. Either that, or they prefer their developers to play find-the-object while wearing earplugs.

The iteration cycles of Scrum usually assume multiple days between meetups, which can create a disconnect between the stakeholder and the team working on the project. Scrum attempts to mitigate this problem by introducing the role of product-owner. They stand in for the stakeholders, and are available to answer questions at any given time. It’s not the same as having direct access to the stakeholders, but it has other benefits. Stakeholders can avoid dealing with the poor communication skills of the development team. Having a charismatic product owner is not deceitful. In fact, it only becomes a problem when they don’t understand the stakeholder’s values.

You must act on feedback. You have to change what you do and how you do it. The order in which you make decisions is also critical, which is reminiscent of Christopher Alexander’s generative sequences1. A generative sequence is a sequence of steps or refinements that produces a healthy structure. A healthy structure is well-formed according to the stresses, strains, and forces surrounding it. Feedback must be timely, and a good sequence provides helpful feedback at the right time so that it can be acted upon efficiently. Consider how some building-block construction toys, such as Lego™, have excellent instructions while others seem inferior. The main difference is not the printing quality but the concern with which the instructions treat the mental model of the builder. The builder must have guidance for the right thing at the right time, and each step has a context. Good design patterns and good instructions are generative sequences. Moreover, good sequences generate a form while not demanding stilted, robot-like following.

So, things created through a process of sequential decision-making steps are of a certain quality, not only due to the steps chosen but also the order in which they are taken. One sequence will have a better outcome than another sequence of the same actions. As a code-related example, consider what might happen if you build all the features first and then fix all the bugs at the end.

Some orderings will only be able to produce a disappointing and inflexible final design. An example would be developing software without considering security or performance and then trying to introduce those aspects afterwards. Other arrangements of the steps will require minimal back-tracking during the final stages.

If the order of steps affects the final design, all agile approaches help because very few lock you into a specific order of activities. None of the elements of the Agile Manifesto prescribes what to do, and few specify when. The essential point is to be aware you don’t know the proper sequence and must actively seek it out.

1

I first came across them in The Nature of Order books (specifically Book 2[NoO2-02], Part 2, Living Processes). In his earlier publications, he introduced the concept under different names. For example, in A New Theory of Urban Design[Urban87], the process is referred to as growing a whole.

Solving the customer’s problems

A strange development occurred in architecture, whereby the involvement of the client diminished as the cost grew. You might think that someone who spends a lot of money on a project would expect to have a great deal of sway over the decisions made, but the opposite outcome is more common. As the cost increases, the time the architect spends consulting with the client diminishes. This reduction of influence is now expected, to the extent that Christopher Alexander was told by his clients[NoO3-05] that they were thrilled to be as involved as they were in developing their own homes. They expected his team to present them with some beautiful final form rather than taking considerable time to gauge their constraints and elicit from them all their deepest hopes and desires for their investment.

In some ways, agile development attempts to parallel this building process. There is a preference for taking a route where you keep the client in the loop, and returning to them with prototypes and mock-ups to get feedback aligns with Agile principles. However, all of the agile methodologies ought to incorporate the practical, concrete practices of Christopher Alexander when extracting those elusive, profoundly human needs. I’m not talking about the problems caused by emergent properties—those are the constraints of the built product. No, I am referring to the gap between what the customer is willing and able to ask for and what they actually want and need1. Even knowing the gap exists would be an improvement.

In the realm of architecture and building, the patterns found were all of a sort that produced environments for the end user. They cared less for the builder and very little for the ego of the owner of the completed buildings. If we consider a building contractor, a developer who funds and then rents out the building, and a potential family who will live in it for a generation or two, we can easily see the differences between them. In effect, the patterns were all about how the building was to be used in line with the actual lived experiences of the family members, not so much the building process. The affordances to the contractor who built the house tended to be those that were also good for future maintenance or for when the family wished to extend their home. In almost no instances were the patterns supportive of landlords or land developers.

As I studied design patterns, I saw an emergent property of these patterns for physical buildings, not just in relation to homes, but with regard to workshops and malls, hospitals and schools too. The patterns were almost always related to two or more of the following states in which people living in and around the building would spend most of their time.

  • Rest and relaxation or recovery.
  • Sustenance and personal maintenance.
  • Commuting and transportation.
  • Communication and community.
  • Labour and the care of others.
  • Personal opportunity and spirituality.
  • Family growth and adaptation.

These aspects of life required a different approach to building than an off-the-shelf product or modular building technique. They necessitated empathy for the specific needs of the end user. The builder needed to keep in mind the life to be lived by the inhabitants of the space once the job was complete. They needed to probe beyond surface or fleeting preferences and opinions and provide a structured but inviting approach to elicit the deepest, most enduring, and often quite trivial-sounding but fundamental needs.

Equally, an agile software developer needs to have empathy for the client. They must have a clear image of the final goal and should concentrate on how it provides for the user rather than the image the user thought of when they first commissioned the project. The user is an expert in their domain, but they are not an architect or a builder. They do not know how to ask for what they want when surrounded by examples of those requests ordinarily being considered irrelevant, insubstantial, or simply not serious.

In building and software development, the customer or client is, or at least should be, at the centre of the project. Their problems are the only essential problems to be solved. Their needs are the only needs that can be fulfilled and provide value.

Given these principles, the developer’s profit is contingent on correctly pricing effort. Profit can be increased by cutting costs or inflating the client’s investment, yet those actions would subvert the client’s needs or prove the process was not already efficient.

Gaining a solid reputation and taking that to the bank with a constant flow of work was more important when people grew their homes and extended their workplaces. Building was a sustainable practice, not a growth industry. Construction jobs were shorter, and people could gauge your work and hire you again. This is no longer true of physical builders, as evidenced by the many attempts to certify trades-folk so that reputation can once again have an impact.

Unfortunately, software development started out in that same reputation-free position. Reputation was nonexistent for software development houses; there was no history to rely on. Those who commissioned software development usually hired in developers, as there was such a small selection of large-scale project development organisations.

Nevertheless, we are now heading towards a world where reputation matters because scarcity is no longer a problem for a customer of small apps. The number of software houses that create small to medium-sized applications has grown tremendously in the last 30 years. In this new world where customers will say no, you must be recognised as someone who will solve their real problems and not introduce more as part of your process. We’re not there yet, but the frequency and size of projects developed on a for-hire basis are growing; at the same time, the number of reviews about developers is also rising.

Solving the client’s real problems requires an understanding of how to develop software and how to elicit the client’s deepest requirements. Only when you can find out what the client needs, despite their inability to ask, will you truly develop software that fulfils a need. Only when you have created a car for a client who wanted a faster horse will you know you have developed the capacity to satisfy those hidden requirements.

1

Some will now think of the quote, commonly misattributed to Henry Ford, about not asking his customers what they wanted as they would have only asked for a faster horse.

Slowly revealing the solution to a complex problem

Solving the puzzle of how to make something is one challenge, but knowing what to make is another one altogether. A customer could be unaware of what they need or be unable to explain it. The form of what you need to build might not even exist. In these cases, your development process could benefit from mimicking Christopher Alexander’s methods.

Many complex products are like this. Neither the user nor the customer will know what they want, but they will know what they don’t like. This kind of negative-only feedback is effective, yet it can create expensive problems for those who are unprepared to work this way. Alexander always worked with a customer, resorting to keeping one in mind if they were absent, much as developers use personas to guide their work. Instead of guessing what they wanted, his processes were more keenly tuned to provide the things that would give them a more wholesome life. His team would take a deeper look at the patterns of the lives of those who were due to inhabit the buildings and sites once the development had ended. It’s these patterns of life that he observed most closely[Notes64]. The actions taken, the reasons why, and the values behind them all played a part in these analyses. They allowed for plans which not only satisfied the customers but also delighted them[NoO3-05].

Complexity is a word with a loose meaning in common parlance; however, there is a more specific sense of the term, and it is this kind of complexity that Christopher Alexander managed to tame. It is therefore worth explaining what the word should mean to anyone interested in his solution. We may have some idea of the term when we consider complex projects; for example, they might include formidable problems or things made of many parts. However, I want to more concretely define the term and bind it to one particular way of thinking before using the word as much as I will in later chapters.

I encountered a better way to judge complexity when watching a talk by Rich Hickey entitled ‘Simple Made Easy’1. Everyone can benefit from watching the lecture, as the content is broadly applicable, even if you don’t know anything about Clojure. (I certainly don’t.)

Complexity and simplicity are not concerned with how difficult or large a system is but with how independent the inputs are from each other. A system with a thousand inputs and a thousand outputs, all wired one-to-one, is simple. However, a system with only two or three inputs, where every output depends on every input, becomes complex. One example is a car driven by throttle and clutch, in which the driving force is a product of both the engine speed and the clutch engagement, commanded by a dynamic balancing act. There are only two inputs and one output, but anyone who has driven a clutched drivetrain vehicle can attest that it takes some practice to learn the complex interaction of those two inputs.

In software projects, the metric for complexity can be the number of things that need to change in response to any change you intend to make. You can measure at the code level and count the variables driving an output; those coupled together like a throttle and clutch.
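As a rough sketch of that idea in code (the names here are my own, invented purely for illustration), compare a pair of independent input-to-output mappings with one output that depends on the interaction of two inputs:

  # Simple: each output depends on exactly one input, so the wiring is
  # one-to-one, however many dials there are.
  def headlight_brightness(dial):
      return dial * 0.8

  def wiper_speed(stalk):
      return stalk * 2.0

  # Complex: one output depends on the interplay of two inputs. You cannot
  # reason about a change to the throttle without also reasoning about the
  # clutch, which is the coupling the throttle-and-clutch analogy describes.
  def drive_force(throttle, clutch):
      engine_torque = 40.0 + 160.0 * throttle    # rises with throttle
      engagement = max(0.0, min(1.0, clutch))    # fraction of torque transmitted
      return engine_torque * engagement

Counting how many inputs feed each output, and how many outputs each input touches, gives a crude but usable version of the measure described above.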

Complexity can arise from technological choices. Lazy evaluation can have complex performance implications, and the use of smart pointers can entangle object lifetimes. Concurrency primitives for locking shared resources can deadlock at the most unexpected of times.
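The deadlock case in particular deserves a sketch, because it shows how innocuous each element looks in isolation. The following Python fragment uses invented lock names; two functions acquire the same pair of locks in opposite orders, and running them on separate threads can leave each waiting on the other forever.

  import threading

  account_lock = threading.Lock()
  ledger_lock = threading.Lock()

  def transfer():
      with account_lock:        # thread one holds the account lock...
          with ledger_lock:     # ...and waits here for the ledger lock
              pass

  def audit():
      with ledger_lock:         # thread two holds the ledger lock...
          with account_lock:    # ...and waits here for the account lock
              pass

  # Run transfer() and audit() on separate threads often enough and they
  # will eventually block one another forever. Neither lock is a problem
  # on its own; the trouble lives entirely in the relationship between the
  # two acquisition orders.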

All of these have something in common—the elements alone are not a problem, in and of themselves, but challenging situations emerge from their relationships.

Now that we have defined complexity, we need to specify how we will use the word ‘complicated’. Something with many parts can be complicated. A complicated sequence or set of related things must be connected in a specific order. The elements or steps only interact insofar as they depend on each other in a strict ordering in time or space. When you change one part of a complicated system, you have a small surface area for change propagation. There is a linear chain of effects. The effect of a change does not ripple back around to the element instigating the change, and the changes in the ripple are simple but frequently tedious; think of protocols, build sequences, or taxes and expenses. These are all complicated processes, but they are often predictable systems. They are knowable. It’s possible to accurately estimate the total impact of your input before you have seen the output.

Given this definition of complexity, we can now pose some questions. First, how does one tame complexity in a project? Second, what did Christopher Alexander do to rein in the complexity inherent in a building project? Finally, do these measures align with software development and Agile principles?

Regarding the first question, is it even possible to tame complexity? The answer is yes to some extent, and one way is documented in Notes on the Synthesis of Form[Notes64]. Alexander’s approach to building was an ever-evolving version of this first work. It included up-front requirements gathering and a discovery process for deriving final forms by concentrating on how things were coupled, which involved explicitly shying away from grouping them by the categories with which we might naturally associate them.

This certainly does not sound like the agile development practices we know. It also does not appear to address complexity. But let’s review for a moment. Christopher Alexander tamed complexity by understanding what it was. It was the coupling of changes. So, rather than creating a hierarchical architectural plan where each part finds a place in a group with others based on our prejudices, he grouped them by the strength of their change coupling alone. He would continually ask questions like, “If I change this part of the design, will that other part have to change too? And if so, does it improve with it, or does it make things worse for it?”

The Notes on the Synthesis of Form approach to architecture includes the following steps:

  • Gathering all the elements that must be considered as part of the design.
  • Revealing connections based on their complexity—how they affect one another.
  • Finding the weakest-linked large groups, splitting them into separate conceptual groups, and finding a way to name or make a diagram of them.
  • Within each group, repeating the search for the weakest connections and splitting them again.
  • Continuing the splitting process like this until all solutions for all subsets of elements seem trivial.

The above process was recursive. Large cutting lines were formulated first, and then those groups were cut up in the same fashion. In this way, his early work revealed connected pieces of our environment. As he worked on more projects, he found some connections occurring repeatedly. Consistently naming these recurring situations led to the discovery of patterns. With patterns, he could resolve parts of a complex system without the burden of the concrete problem biasing the solution. He could now avoid the XY problem, being wise to the deeper need hidden beneath a shallow problem definition.
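Alexander carried out this decomposition by hand, with human judgement doing the work of any single metric, but the shape of the recursion can be sketched in code. The following Python fragment is only an illustration under that assumption: it takes a ready-made table of change-coupling strengths between pairs of named elements and keeps cutting each group along its weakest link until what remains looks trivial.

  def weakest_split(group, coupling):
      """Split a group in two: seed each half with the most weakly coupled
      pair, then assign every other element to whichever half pulls on it
      more strongly."""
      pairs = [(coupling.get(frozenset((a, b)), 0), a, b)
               for i, a in enumerate(group) for b in group[i + 1:]]
      _, seed_a, seed_b = min(pairs)            # the weakest link in the group
      half_a, half_b = [seed_a], [seed_b]
      for element in group:
          if element in (seed_a, seed_b):
              continue
          pull_a = sum(coupling.get(frozenset((element, x)), 0) for x in half_a)
          pull_b = sum(coupling.get(frozenset((element, x)), 0) for x in half_b)
          (half_a if pull_a >= pull_b else half_b).append(element)
      return half_a, half_b

  def decompose(group, coupling, small_enough=3):
      """Keep cutting along the weakest links until every group is small
      enough that solving it seems trivial."""
      if len(group) <= small_enough:
          return [group]
      half_a, half_b = weakest_split(group, coupling)
      return (decompose(half_a, coupling, small_enough)
              + decompose(half_b, coupling, small_enough))

  # Example shape of the inputs, with invented elements and scores:
  # coupling = {frozenset(("window", "light")): 9, frozenset(("window", "parking")): 1}
  # decompose(["window", "light", "parking", "roof"], coupling)

The point of the sketch is the shape of the process rather than the scoring; the real work lay in judging which links were genuinely weak and in naming or diagramming each group once it had been cut free.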

Many recurring collections of coupled elements tended to distil down to only a few happy, balanced solutions. However, these patterns were not actually solutions; they were the common properties of the organisations of elements solving those problems. The patterns were the pairing of the problems to be solved and what would be true for any good quality solution.

In a complex project, if you reduce the number of connections between elements or make the relationships visible, adjustments and estimates become easier to predict and contain. Changes in a project based on preconceived conceptual groupings cause cascading effects and induce further stresses throughout the system.

Adjusting the style of a window frame can affect whether a window can provide enough light to make it worth installing in the first place. For example, with the advent of UPVC windows, given the requirement for approximately 10cm of border around the glazing of an openable window, it is hardly worth the effort to build window spaces less than 40cm wide.

Furthermore, a style change can affect the number of bricks needed or whether a wall needs a reinforced steel lintel. Physical limits frequently dictate whether a design is valid or worthwhile. The availability of materials should too, but so often you see examples of people shipping materials halfway across the world just to have precisely what they ordered. In a sense, this precision- and modularity-based approach could be considered environmentally unfriendly.

With code, we see similar patterns of complication. You might include a global or a Singleton to allow access to a unique feature of your platform, but then later, a revelation requires you to offer two or more of these (e.g. after-the-fact testing suddenly becoming necessary as part of a security or safety audit). Now, everything which accessed resources through a Singleton or global will need a context passed to it so that it can act and react according to the test-bench setup.
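A minimal before-and-after sketch of that predicament, with names I have invented for the example, looks something like this in Python:

  # Before: every caller reaches for the one-and-only platform object.
  class Platform:
      _instance = None

      @classmethod
      def instance(cls):
          if cls._instance is None:
              cls._instance = cls()
          return cls._instance

      def read_sensor(self):
          return 42                    # stands in for real hardware access

  def sample_before():
      return Platform.instance().read_sensor()

  # After: the dependency arrives as a context, so a test bench can supply
  # its own implementation without touching the production platform.
  def sample_after(platform):
      return platform.read_sensor()

  class BenchPlatform:
      def read_sensor(self):
          return 7                     # deterministic value for the audit

  assert sample_after(BenchPlatform()) == 7

The change itself is tiny; the pain is that every call site written against the Singleton now needs the context threaded through to it.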

Using reality to reveal coupling

Christopher Alexander’s solution to discovering how elements interact was to be at the site. He used many mock-ups with bricks placed dry, cardboard constructs and painted paper strapped to wooden frames, as well as anything else his team could do to trial an idea before it had to be, quite literally, set in concrete. This is like prototyping for software developers. Proofs of concept take us from untested hopes to feasible options or moments of clarity and despair.

Christopher Alexander used mock-ups to discover what you can’t extrapolate from paper designs, including how objects inside and out obscure or bounce light. The choice of an angle and colour combination may affect your mood but go unnoticed on a paper plan. How a view looks through a window may cause people to linger there, and so we see more passing space is required to navigate around those captivated by the vista.

We engage in a similar behaviour when doing exploratory programming, trying out ideas in the source rather than relying on paper designs or mental modelling. We make quick changes to prove that an idea might work in practice. Then we roll it all back and start properly. Using a spike like this shows us how the development will progress if we commit to it. We can identify how it might feel to build upon it later or even how it might be for the end user via a UX mock-up.

The most pertinent aspect is that the speculative activity should be done in the place itself, the actual site where the final work will eventually be committed. This means that you get to see how it impacts the final form, rather than just guessing at the impact. When you work away from the main area and integrate at the last minute, by contrast, you learn of the problems quite late. The value of this activity is that you are always building towards gaining hindsight; effectively, you are already wiser before you lay the first stone.

In Domain-Driven Design[DDD04], Eric Evans claims that domain models undergo many iterations and name changes before finding a ‘deep model’. As you internalise the core structure of the problems you face, you see nuances in the description of the problem. Those moments of recognition are a form of acquired discernment. When you can perceive these new differences, you also recognise the need for a different reaction to the phenomena. A different interpretation of the world begets a different response to it, and before you can discern a difference, there is no way you can incorporate it in your design. We cannot rely on hindsight. We must depend on many iterations, as it is the only way to see what should have been obvious all along.

We often only see the shape of the solution as we draw very close to the end of a project. The form of the problem is similarly revealed quite late. This is not a coincidence. We would not need to iterate if we fully understood the problem we were trying to solve or all of the problems inherent in any solution; if we did, software development would simply be a matter of data entry.

Development as a puzzle

You can compare this to puzzles. When you first encounter a puzzle, whether it be a word puzzle, a wooden or metal toy, a puzzle cube, or a sudoku, you are given it in its literal form and have only a simple concept of what the outcome should resemble. This is much like your customer’s goal. You have yet to learn how to get there but can visualise the final state.

The more you poke around at the puzzle, the more you learn about it. You uncover the sub-problems and generate local solutions. You’re no longer solving the original problem; you’re solving the steps towards it and generating new words for states you see and actions you can take. This is very much like programming, which is why people often say you need to plan again once you know more. They mean that you need to replan once you understand more about the problem and its sub-problems and have metrics for the value of the sub-problems’ solutions.

We work through puzzles by learning about the contexts around them and finding features of the problem we can fully understand. We can solve the complete puzzle only when we grok the smaller parts. Sometimes, we throw a lot of apparent progress away, which can be disheartening for some people; nevertheless, deleting code is not throwing away progress. Progress is accumulated in the knowledge of the people solving the problem and should therefore be measured in how fully a problem is understood. So you should regularly review the problem domain thoroughly and as early as possible.

You need to highlight all the sticking points and sub-problems and find a way to use those vantage points to see what you need to see. Sometimes, you will only know about a sub-problem once you have solved a different one. You will have to deal with many unknown unknowns in your puzzle or project, which is why collecting requirements must also be iterative. In Domain-Driven Design[DDD04], Eric Evans suggests you should go back to the domain experts multiple times while developing a solution, not because they have new information for you, but rather because you can finally receive the information overlooked during previous interviews. It is only intelligible to your senses after you have built the requisite mental scaffolding to support the nuances of their statements.

This agile approach to requirements means a requirement can suddenly appear. It may have been misinterpreted, misunderstood, or ignored the first or second time. Later, due to an awareness of interconnected details and revelations from newly acquired discernment abilities within the mind of the software developer, it can suddenly appear to be a critical element within the grander scheme of things. When we listen and write things down, we only write down what we can understand. We cannot capture requirements that go unmentioned because they are too obvious to the person relaying them. We are also oblivious to what we fail to write down because we take those elements for granted.

To detect these unknown unknowns, Christopher Alexander worked within the domain whenever possible. Meanwhile, agile developers try capturing them by asking additional questions about each area. They can’t look out of a cardboard window, but they can ask whether a report is essential and what the business impact would be if it was delayed or otherwise unavailable. Sometimes, a tiny thing can have a huge impact. For example, someone might say, ‘… and then we email the report to three people’. You could interpret this as them needing the ability to email a report, but when you dig further, you find out that it is a release requirement. Perhaps the report is evidence required during a safety audit and must be archived and delivered with the build, but the other two emails are for interested managers or just because the person in charge of CI wanted to set up a liveness checker based on it. You must ask, “Why is that important to you?” to get these nuanced answers, which subsequently lead to entirely different requirements.

Given that revelation is part of any worthy process, it’s clear that the classic waterfall model of software development is demonstrably wrong. At least, the presentation of its process seems that way. We want to avoid that amount of up-front, context-free architecture, and integrating mock-ups and prototypes into the process will lead to some of the best and highest-impact design revelations.

However, many agile processes are unfavourable for different reasons, as they exhibit urgency problems. Urgent, essential tasks should be undertaken immediately. Important, non-urgent tasks should override urgent, unimportant tasks, but agile processes often prioritise work items the other way around. They select the most urgent thing at each stage, an approach that eventually blows up in larger projects.

Some will argue that this is not true because an important task becomes urgent once it has gone unaddressed for long enough, but this implies that you have mature members on your team who are able to signal the problem ahead of the upcoming critical moment and filter out tasks that are not important. So, most agile practices lead to good outcomes, but only because they reintroduce features that were removed in the unwavering pursuit of ultimate efficiency.

Christopher Alexander’s approach to urgency was different. For him, decisions had to be made and a problem fully understood before any irreversible action was taken. Urgent tasks lead to discovery and help in making decisions. They are always about ensuring you make the right decision before it matters. As his process matured, the number of urgent tasks declined, as the design-patterns process for architecture led to decisions being made as part of a good generative sequence and before any money could be spent on the wrong things.

The difference here is that agility, as it is practised, does not separate the design and problem-resolving stage from the building phase. To some extent, that is the nature of software development. Proofs of concept tend to become final products, and prototypes continue to gain more features until they become shipped software. As the mock-ups of software are interactive, there’s a tendency to think it’s possible to polish them up into the final product. Unlike the cardboard and paper mock-ups of Christopher Alexander, from the outside, proof-of-concept code looks the same as the final form.

If there were no way to realistically ship prototypes—i.e., if there were such a thing as cardboard code—then the processes we put in place in the name of Agile would not be as severely misused. If we could use agile development practices to become better prepared, we could slowly reveal the solution to the complex problem and only pour the concrete when we believe we are ready. I have experienced a reasonable amount of success by actively choosing the wrong language to develop code in, purely so I can be sure it remains a prototype. If you know you need performance, write it in Python. If you know it needs to run everywhere eventually, write the code in Swift, F#, or some other single or limited platform language. This is a strategy I have employed to avoid needing to be good at politics and still escape the technical problems caused by trying to build a final product on the frame of a prototype.

1

A link to one of the versions of the talk can be found here: https://www.infoq.com/presentations/Simple-Made-Easy/

Just enough and no more

Christopher Alexander ran his projects with a fiscal policy paralleling that of agile developments. He developed a method to drive the process towards a more natural sequence of growth from local decision-making. Traditionally, building projects went something like this:

  1. They are budgeted
  2. A plot is found
  3. An architect is appointed and draws up preliminary designs
  4. Planning permissions and other approvals are set in progress
  5. A building contractor is brought in
  6. The physical work of the construction takes place

With the stricter budget but flexible spending structure introduced by Alexander, customers would often get much more of what they needed and far less of what was unimportant to them. This echoes the principle of preferring working software over documentation and customer collaboration over contracts.

When you have a multi-stage process, you find budgets become a playground for corruption and profiteering.

At each step, the goal of the individual developer is not to build a delightful building for as little as possible but to make as much money from the building process as they can without breaching their contract. Essentially, cost-cutting practices increase costs.

  • Engineers are rewarded for spending more time than required on a project when it pays for extra hours without pre-approval
  • Builders are pushed to select the cheapest materials and least expensive suppliers that fulfil their constraints in order to compete
  • Once a contract is in place, verbal promises do not bind
  • Work continues as long as no breaches are detected, compounding the effect of the sunk-cost fallacy. Often, it seems easier and cheaper to accept the exploitation of wording than to find an alternative contractor
  • The whole tender process is naturally biased towards those who will do the work for less rather than those who aim to produce the best possible outcome

Agile development could1 break this cost inflation by doing the same thing Christopher Alexander did. The budget could be set, and the delivery should be decided upon. If something costs more than expected, it is made smaller, or something else takes the hit and is reduced in quality/quantity or withdrawn. No one is allowed to spend the excess because usually there is none. And even if there is, it is returned to the entity commissioning the project.

We reinforce this further by following another tenet of Christopher Alexander’s processes: delivering regularly and getting feedback from the client. This way, they also get to see the work in progress and decide what they really want now that they can see the product closer to the final form. The clients don’t pay for what they didn’t ask for. Instead, we offer them the chance to see what they need to ask for.

In most of the projects run in the Alexandrian way, builders and architects worked locally, iteratively, and directly on the site. They were present in the client’s context. The time cost for uncovering a revelation and getting a response was tiny and sometimes non-existent as the client may have been part of the labour force. This waste reduction in the form of zero or minimal response time requires a solid link to the customer with regular feedback sessions or a knowledgeable client representative.

Working on site also meant that the architect and the builders were practising continuous integration and delivery with bricks. They included other aspects, such as continuous improvement of their processes, as they built the necessary tools on demand. They designed new ways to construct with prototypes and proofs of concept. In one project[TPoH85] they even had brick machines on site to create just what was needed and invented novel roofing techniques using thin wood strips and concrete-reinforced canvas to match the available local resources.

All this was just enough and no more. Christopher Alexander did not use these projects to fund some irrelevant side research. Instead, he charged a fair and reasonable fee. He did not overcharge based on his reputation or undercut to get the contract. This would later prove to work against him, as the wider world did not play by his rules.

1

It’s possible, but as we know from experience, organisations claiming to follow agile development practices often adopt the names while the actions remain the same.

Modern old way

In 1882, the old way received a blow like no other. Frederick Winslow Taylor was promoted to machine shop foreman and was free to work out a new approach to getting workers to produce at greater efficiency. He’s known for the invention of the time-and-motion study, and this approach has introduced a new level of efficiency and productivity to many industries. As a consequence of its success, management has become a more scientific occupation and now impinges on the adaptive nature of skilled workers.

For some tasks, the approach works very well, but even to this day, managers are often ignorant of the principal disparity between physical labourers and mind-workers—the former group primarily works to live and for the community of the effort, while the latter are more strongly driven by mastery, autonomy and purpose.

Later on, Taylor produced a book which includes a comment indicative of his beliefs:

It is only through enforced standardization of methods, enforced adoption of the best implements and working conditions, and enforced cooperation that this faster work can be assured. And the duty of enforcing the adoption of standards and enforcing this cooperation rests with the management alone.

— Frederick Winslow Taylor, The Principles of Scientific Management[Taylor1911], p. 83.

When I read this, I picture someone who does not trust their workers. I believe they think of them as children or cattle. I recognise the tone of someone who believes management, with no experience of working with a new technology, will somehow know how best to use it, and that those same managers will be able to direct their workers without ambiguity.

A hundred years have passed, and these words still echo in the halls of management. Those in charge of organisations still generally believe in command and control. They measure outcomes and use cost accounting, forecasting next year’s chickens based on the eggs in this year’s baskets. However, software development has proven problematic.

Software developers consistently underestimated their work and missed deadlines. Something was wrong, and management didn’t know how to fix those unruly techies. Knowing they needed to get on the IT money train, the heads of production were never going to give up on attempting to wrangle them into good behaviour. Yet none of them would even consider giving up their command-and-control approach to projects. In fact, the worse the experiences of the project manager, the more tightly they turned the screws. They demanded more and more detailed estimates and finer task breakdowns. At least they were extremely efficient at creating a demand for printer ink and paper.

This lack of trust led to even worse behaviour. Demotivated mind-workers are a revolting bunch. Those without autonomy started to feel like cogs in the machine. Those lacking the space to develop their skills began seeking job security through other means. As goals grew more ambiguous, purpose and shared vision fled, destroying any sense of achievement. We spent a hundred years increasing efficiency and a hundred years disintegrating pride.

I find the whole story so sorrowful. The old way generated motivated workers; the kind we want to build projects around in an agile environment. When you enable skilled and motivated people to do their best work, give them clear goals and let them find the best solution, you always get a better solution than if you tell them what to do. Christopher Alexander’s team worked like this, and their clients began to see how motivational it was. When the project is goal-driven—purpose-driven to satisfy a real and present client—it’s propelled by virtuous pride. We’re driven by the pleasure we take in seeing a smile on someone’s face in response to our efforts. We do not seek awe but to satisfy and delight, and this is much healthier than being profit-driven.

Another aspect of the old way is the notion of local people helping where they could and training to support themselves in the future. Christopher Alexander followed this approach in the Mexicali project[TPoH85]. The people who eventually lived in the homes were also part of the labour force.

In Team Topologies[TT19], we see the concepts of stream-aligned and enabler teams. The idea originates from looking to reduce the cognitive load for others by ring-fencing individuals who are knowledgeable in a sub-domain so they can be available at short notice. In a large project, we generally align sub-teams with specific product features; these are the stream-aligned teams. When their members have to learn how to do uncommon but mandatory activities, it can become a form of waste. Whether you are working on a computer game or a word-processing application, no matter what sub-project or feature you are working on, you must set up and maintain continuous integration, static-code-analysis, issue-tracking, or other technically complex supporting systems. Enabler teams help to keep the stream-aligned teams on track. They act as knowledge-on-demand resources when dealing with tasks that the stream-aligned teams only need to do once.

The locals in the Mexicali project were the stream-aligned teams with a purpose and vision. Those coming in to help them with processes and skills in construction and design were their enablers. The enablers helped the teams achieve necessary but daunting tasks. For example, builders on the enabler team taught the locals how to set up and work with the brick-making machine. The locals did not know how to plan for plumbing or other utilities, but this was a one-off event for each family to get them past a bump in the construction. Enabler teams provide on-demand, just-in-time knowledge. This bears a great similarity to the workshop, where an apprentice and a master of the trade would work side by side, with a specialist being brought in to do the irregular but occasionally necessary work. When we form and maintain a team under agile development, we forget about this spontaneous need for knowledge and often ask a team to grow instead of supporting them. A team should self-organise around a problem but demand the right to remain ignorant when it is more effective.

Some programmers do work this way. They retain the option to work on tools as they see fit, according to their experience of what would make future work effortless, safe, and efficient. Craftspeople would look for ways to improve their work processes when there was a bit of downtime. Leads would chat with their staff over a beer at the end of the working day, and grievances would be aired as they are now, during a sprint retrospective. During a lull, they would sharpen their saws. During a longer one, they would invent a new tool altogether. However, this required slack. Without slack, there is no time for reflection or improvement. The time and motion studies took it all away. In so doing, they wasted those innovation-generating skills of the workers. Have you ever noticed how programmers who have slack and autonomy often have the best suite of tools?

Not causation

Though agile development practices and Christopher Alexander’s processes have many parallels, it is not accurate to state that Agile was directly brought about by his work. The movement away from Taylorism and towards a closer connection between client and worker occurred concurrently across many fields. Usability was an up-and-coming discipline around the same time, with many processes becoming formalised and documented for public consumption, such as Usability Engineering[Use93] by Jakob Nielsen and The Design of Everyday Things[DoET88] by Donald Norman.

Agile principles emerged when like-minded individuals observed management’s response to the software crisis in their organisations and the workplaces of others. It was a response to those regressive actions and was a solution centred around practices that were conceivable at that juncture. The strength of the Agile Manifesto[AM01] stems from a particular lack of strictness about its solutions. Stating simple preferences rather than laws invited many interpretations and made the migration more palatable for managers who wished to continue commanding and remaining in control.

Because the link was weak, the Agile movement did not incorporate various aspects of Christopher Alexander’s theories and processes. Some remain missing because the movement never introduced them, but others are missing because they faded away. Perhaps they disappeared because we failed to understand their value, so these activities were not deemed worthy of protection. However, a more potent force may have removed them intentionally.

The demand for fiscal control was only minimally adopted. Agile development has almost entirely dropped this element now. The practice of bringing the customer into the work process has diminished along the way, with many contemporary interpretations of Scrum1 assuming product-owners to be both sufficient and necessary while neither is true. Many other aspects of feedback garnered through presence at the site have been lost, and the approach of taking small steps has been diluted into sprints or iterations of delivery rather than opportunities for direct and instant feedback.

Although we cannot trace the Agile Manifesto directly back to Christopher Alexander’s work, the recognition of software design patterns can be. Indeed, the concept of software patterns might have lain dormant for decades had it not been for Ward Cunningham and Kent Beck picking up on the connection in 1987. They approached a design problem the way Alexander did, by developing a language for the project. They used patterns to guide novice developers2. This original work on design patterns in software development was more closely aligned with Christopher Alexander’s processes, and the simple pattern Collect Low-level Protocol was closer to an Alexandrian pattern than any of the GoF’s examples. Deviation from the principles of A Pattern Language[APL77] is one of the reasons why design patterns failed in the way they did. But this chapter is not about failure. That’s what the following one addresses.

1

Scrum is a process aiming to improve workflow for teams. Meetings guided by the process include refinement, planning, standups, and retrospectives. Roles include the developer, who can make an impact on the product directly; the scrum master, who facilitates the meetings; and the product owner, who stands in for the real customer. Product requirements, often described as stories, are either being worked on by developers or waiting in the product backlog.

2

The full report is in Using Pattern Languages for Object-Oriented Programs[UPL87]

Did Patterns Really Fail?

Some may think it obvious: design patterns did not fail. However, whether they failed depends on what you mean by ‘design patterns’. In some interpretations, they have been and continue to be wildly successful. In others, they have faded away and no longer exist in mainstream consciousness. With so many definitions of what a pattern is, and with so many of them agreeing that patterns are related to the GoF book, can we at least decide whether any of those patterns failed? And to what degree and in what ways?

Design patterns aren’t design patterns

There are three reasons why the GoF patterns aren’t patterns in the Alexandrian sense. The first is the simple and acknowledged fact that their patterns are solution-oriented rather than problem-oriented. The second and more contestable point is that they are idioms of the programming languages of the time. The last is the concern that they aren’t alive. We will discuss this final point in greater detail later, but for now, consider aliveness to be the properties of self-starting, adjusting, and adapting as necessary. Living patterns engage with their neighbours, making it easier for surrounding elements to find satisfying configurations.

Solution-oriented

For many patterns, we need to introduce a starting point: the problem. All the GoF patterns are visions of a final form—a ready-selected solution. The process by which a problem is understood and resolved is the lesson we want to teach, but a solution doesn’t do this. Alexandrian design patterns are problem-oriented. They are about wisdom and experience when faced with problems encountered many times before. Solutions tell you what it looks like when it’s done, but people learn best from questions, counterexamples, or stories with conflict. People grow the most when they are invested in outcomes and aware of the stakes—when there are unresolved forces.

Even if they were patterns, not idioms, the GoF patterns lack the problem orientation of Alexandrian patterns. They lack the required tension.

All A Pattern Language[APL77] patterns are about repeating problems, not repeating solutions. This difference is critical because patterns aren’t simply advice on what to do. Patterns give you the structure and prep work for the questions you must ask yourself when encountering a problem. All the patterns in A Pattern Language provide a plan for a solution, but the opening section of each pattern is a recognition of a problem, with context and forces.

The Strategy pattern might be an actual pattern in software design, but it’s presented as a solution to the problem of wanting to specify behaviour by a parameter. What’s missing is a context in which you might want to transform existing code to such a spec. A pattern should be a solution to a compelling problem—a set of forces demanding that you deal with them. Without the context, Strategy is too similar to a variant of the Factory Method, State, or Command pattern, and it can be difficult for someone to see when you might choose one over any of the others.

Indeed, you could claim that State and Command are both extensions or specialisations of Strategy. The only requirement for State is a contract between the state object and the context object defining responsibility for change. For Command, you only need to ensure the Strategy objects contain the information about the receivers of their effect. The issue here is that the problem of generating commands and handing them to the invoker is the same problem as selecting a formatting algorithm and offloading it to the line-ending calculator. The problem gains nuance as you move to dynamically changing strategy objects, such as when you think about behaviour as state or move to include extra information about how to carry out the strategy, such as with Command. What’s gone wrong with this approach is how, by writing these as separate patterns, we have lost the option of considering strategic or stateful commands. A command could act differently based on its state as part of the strategy or template method. Separation of concerns isn’t always a good idea.
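
As a minimal sketch of this relationship (my own illustration in C++, not code from the GoF book; the LineBreaker, AppendCommand, and Document names are invented), a Strategy is simply an object standing in for a call, and a Command is the same kind of object once it also carries knowledge of its receiver:

#include <iostream>
#include <string>

// Strategy: an object standing in for a function call.
struct LineBreaker {
    virtual ~LineBreaker() = default;
    virtual std::string layout(const std::string& text) const = 0;
};

struct SimpleBreaker : LineBreaker {
    std::string layout(const std::string& text) const override {
        return text + "\n";             // a naive line-ending calculation
    }
};

// Command: still an object representing a call, but it also carries
// the information about the receiver of its effect.
struct Document {
    std::string contents;
};

struct AppendCommand {
    Document* receiver;                 // the extra knowledge Command needs
    std::string payload;
    void execute() const { receiver->contents += payload; }
};

int main() {
    Document doc;
    AppendCommand cmd{&doc, "hello"};
    cmd.execute();                      // the invoker needs no knowledge of Document

    SimpleBreaker breaker;              // the caller selects the algorithm
    std::cout << breaker.layout(doc.contents);
    return 0;
}

A stateful or strategic command is then nothing exotic: the same object with a change contract or a receiver attached, which is precisely the combination the separate write-ups discourage us from considering.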

Idiomatic

But let’s talk about idioms. Some patterns are examples of how things were done at the time. To my mind, Memento, Bridge, Builder, Mediator, and Visitor are all examples of solutions that aren’t self-forming. At some point, people just agreed to work that way. Other alternatives exist, at least for the problems they intend to solve, so the patterns feel idiomatic of the languages at the time they were documented.

Idioms aren’t patterns because they’re not repeatable discoveries. They were decided or imagined, not a spontaneous consequence of their environment. For example, Memento is meant to provide a way to store the state of an object under change without violating encapsulation. The solution asks for a wide interface for the originator and a narrow interface for the caretaker. However, there is little in this pattern not dealt with by an object serialisation interface, generic for any type. It need not be aware of the originator to fulfil its role. In some languages, this is a given feature. The Memento pattern seems like an idiom that is looser than the relatively commonly understood but undocumented pattern of Serialisation.

What can reach beyond is a pattern that considers the problem behind the inclusion of Memento: the problem of state recovery. Consider a system that has copy-on-write capability and uses immutable data structures. In such a system, we can interpret Memento as an object that holds onto a reference to older object states. Unwinding the state is a trivial case of recovering the previous reference. In other systems, an object from the originator may not be enough. You may need more context to undo any changes.
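
Here is a sketch of that reading, assuming immutable snapshots shared by reference; the Editor and EditorState names are mine, not from any catalogue:

#include <memory>
#include <string>
#include <utility>

struct EditorState {                    // immutable once constructed
    const std::string text;
};

class Editor {
    std::shared_ptr<const EditorState> state_ =
        std::make_shared<EditorState>(EditorState{""});
public:
    // The memento is nothing more than a shared reference to the old state.
    std::shared_ptr<const EditorState> save() const { return state_; }
    void restore(std::shared_ptr<const EditorState> memento) {
        state_ = std::move(memento);
    }
    void type(const std::string& more) {
        state_ = std::make_shared<EditorState>(EditorState{state_->text + more});
    }
    const std::string& text() const { return state_->text; }
};

int main() {
    Editor e;
    e.type("hello");
    auto checkpoint = e.save();         // O(1): just another reference
    e.type(" world");
    e.restore(checkpoint);              // unwinding recovers the previous reference
    return e.text() == "hello" ? 0 : 1;
}

Saving costs one reference copy, and unwinding is nothing more than restoring it; no wide or narrow interfaces are needed.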

If we consider Singleton, you might think this is a typical usage pattern appearing in many different places, but the form of the solution is idiomatic. Better answers to when and how to instantiate the solitary object exist and would be preferable. But the Singleton pattern provides a stable, understandable, idiomatic, and wrong option for all to use and copy, and as with many idioms, it’s not quite bad enough to cause problems most of the time.

Not alive

The last point of contention—the claim that the patterns are not alive—is a little more subtle. Given how difficult it is to demonstrate an absence of evidence, I make the following argument with limited confidence. The design patterns in the GoF book aren’t living patterns.

In this case, we’re interested in whether they are self-starting and neighbourly. The requirements for aliveness are contrary to those of an idiom. A pattern must come into existence of its own accord, usually due to emergence from a context, not simply agreed upon. You prove liveness by finding the pattern presenting itself as a novel, locally sourced solution at various times and places. In physical architecture, multiple spontaneous incarnations must be sighted for a pattern to be recognised as such. The majority of the GoF patterns seem lacking in this respect.

Outside of the group members who collected them, many listed patterns only have limited evidence of prior existence. For a problem-solution pairing to be a pattern, it should be self-forming. Not finding examples matching these patterns in the wild, outside of any environments where the GoF had influence, would indicate they could have been memes or idioms of those specific developers rather than patterns.

This is not to say the GoF patterns are not self-starting, just that there’s little evidence that many of them are. The Iterator, Strategy, State, Template Method, Flyweight, Factory Method, Prototype, Adapter, Observer, and Interpreter are all patterns you can spot in design-pattern-blind code if you know what you’re looking for. For example, the C++ Standard Template Library includes and extends the core concept of an iterator as an object to a remarkably flexible final form.

But, and this is a big one, try collecting all those examples together. Find someone who does not know about the GoF patterns and ask them to sort them into groups. You might be surprised at how they collect them into different sets than someone who has read the GoF book[GoF94].

Beyond self-starting, there’s the aspect of linkage with other patterns and elements. The patterns in the GoF book are all independent and separate except where they seem dependent and inseparable. Patterns are neighbours, neither coupled nor isolated. The GoF patterns do not automatically support each other. The patterns in Domain Driven Design[DDD04] strongly support each other but are not so tightly coupled that they become part of the same pattern. The first POSA[POSA96] book refers to pattern systems and collects them together as an Alexandrian pattern language.

1

The Pattern-Oriented Software Architecture[POSA96] and Pattern Languages of Program Design[PLoPD95] series in general, but also Domain Driven Design[DDD04] and the contents of Patterns of Enterprise Application Architecture[PoEAA03] define levels for pattern application.

Unchanging

Christopher Alexander demanded we raise the practical questions about our projects—the questions asked by those who must live with the results. Things change, and people’s lives change, too. Our relationship with space changes over time, as do the materials we use for construction and the choice of location we wish to develop. This was the reason the traditional methods of building and architecture began to fail and what prompted Christopher Alexander to write Notes on the Synthesis of Form[Notes64]. Modern techniques fail even though they adapt to change, but change has already dealt a blow to the traditional approach.

Any patterns we adopt must be part of a process by which those patterns can be reviewed and improved in later years. This is why we should challenge the GoF book[GoF94] in light of our findings about complexity and simplicity. The book requires an update because better solutions to the problems it addresses exist, and those solutions don’t fit into the book’s structure. The update should include warnings about the issues that were found when some patterns were introduced. New solutions replacing old ones must be presented clearly, not overshadowed by a popular but ageing reference.

The need for such a process of review and refinement is why I am not happy about suggesting replacements for the GoF patterns in this work. I want to introduce the idea of replacing Interpreter with the twin patterns of Specification and Monad and move away from the older description to make the value more apparent. But the future holds different problems and I do not have unlimited foresight. In some places, we need safe code over flexible code, in which the pattern No Unsafe Paths plays a part. We need faster code in others, in which a suite of data-oriented design patterns1 would play a role in resolving forces of wastefully slow software. Replacing the GoF book would mean including too much or not enough.

The patterns of development will change as the languages we use change. Many books showing us how to develop with these languages have already been published. Some programming languages aren’t even upfront with you about the fact that they are languages but they have patterns all the same. For example, who would say developing a Node.js application is the same as writing vanilla JavaScript for a single webpage? And because there are different dialects of these languages, specific patterns have emerged. There are patterns of working within any common language or framework. Popular game engines have their own patterns alongside the idioms. Hardware constraints also lead to patterns. Embedded, high-frequency trading, and mobile development are fertile ground for the emergence of new patterns.

And that is the point. Patterns emerge. They may be eternal forms, but they can also be forms of a transient environment. New patterns may be as discoverable as the processes for multiplication or making rope. Remember, even these patterns were discovered with rational thought and organic matter.

1

Sorry, the book in which I shall be including them is not yet written.

The abundance

Before the GoF book[GoF94] was released, patterns were present and known about by quite a few people. However, the published patterns were not clearly labelled as design patterns. Some were described as processes, others as guidelines for a project. Once the GoF book arrived, though, new discoveries were quickly labelled as patterns. It’s as if the presence of the book legitimised the terminology of patterns for software engineering. However, the quality of these published patterns varies considerably. Those published in the PLoPD books1 went through a shepherding process, so they should portray the better-curated side of the movement, but even they are lacking in some surprising ways.

The shepherding process helped people write up the patterns they had found. It was intended to guide a pattern author to produce a text that fulfilled the criteria of a design pattern and provide feedback on whether the pattern was faithful to the concept. In practice, this was achieved in a fashion comparable to a writers’ workshop. Any pattern that went through the process should be free of obvious mistakes and, by virtue of having come out the other side, should be worth reading if you encounter a similar problem.

As a modern reader, I find many of these patterns both hopeful and naïve. Many of them aren’t even patterns. They suffer from similar problems to those of the patterns in the GoF book. At the time, the process of finding patterns and the form of patterns were still only vaguely understood by many. And there were other problems. Because they come from conference proceedings or from different books with different authors, they are inconsistent in their presentation, content, and size. All of this adds to the difficulty of reading and using them.

The patterns presented in Christopher Alexander’s A Pattern Language[APL77] are many—the book includes 253 patterns in total. But they are at least all readable, consistent, and open about their own validity. This last point is made clear by a notation on the patterns: a single or double star, or the absence of any. Alexander and the other authors were very aware of how difficult it was to be sure about whether a pattern is good and wholesome.

In this sense, each pattern represents our current best guess as to what arrangement of the physical environment will work to solve the problem presented. The empirical questions center on the problem—does it occur and is it felt in the way we have described it?—and the solution—does the arrangement we propose in fact resolve the problem? And the asterisks represent our degree of faith in these hypotheses.

— Christopher Alexander, A Pattern Language, p. xv.

The term hypothesis is used to describe each pattern, signalling the authors’ stance on how the patterns were discovered. Perhaps it also indicates a slight hedging—an act of self-preservation should the patterns turn out to be mere wishful thinking. Even so, the authors were conservative in handing out their approval of the patterns in the book. Very few attained the two-star status reserved for those patterns describing invariants found in all wholesome solutions to the problems they framed.

An abundance of content is not always a good thing. In the realm of software design patterns, it was not beneficial for pattern authors or consumers. Without curation and strictly enforced well-defined guidelines, many submissions were published regardless of whether they were usable, correct, or even found in real life.

As I researched these older patterns, I became disheartened and felt some developers included these hopeful patterns just because they wanted them to exist, thinking it would be nice if they came true. I can understand the thought behind doing so and how a charismatic author might weave a good tale and convince a group or shepherd to sign off on the work. Other patterns appear to show off what the authors knew or were able to solve. These patterns anger me. Tainted by ego, they are all one-off solutions and never patterns. I believe their purpose is not to teach but to stake a claim.

Reading these early patterns, you get the impression that these egoistic and hopeful specimens were taken from samples of one or fewer. Sometimes, they sound like an idea of something that might exist. Writing down what you hope to be true is a mistake many non-fiction authors make, and I hope to avoid doing so here.

As the movement aged, and with later books, the quality and pattern-ness of the published patterns increased until the last books comprised only very well-curated content. The last PLoPD[PLoPD5-06] book contains some outstanding patterns. I have trouble finding fault with many of them, even though most still seem to be solutions to specific problems rather than principles for generating solutions.

Because of their abundance, even if all of the published patterns were proper patterns, their users would suffer because there’s too much to take in. Part of the charm of the GoF book is that it only requires you to learn 23 patterns. With thousands out there, where does one start? Which pattern collection do you browse first? Whose patterns are foundational? Whose would be helpful in your area of development? Again, this is a problem stemming from abundance without curation and cataloguing.

Our modern world has countless songs and films, and finding one to suit our tastes is undemanding. With patterns, it’s a different story. Is this simply the difference between entertainment and engineering? I don’t believe so. The problem is how we search for what we want. An abundance of movies is not a great disadvantage for the consumer; there are genres, and people know what they want. With patterns, the genres are unknown, and sometimes we don’t even know we have a need, so we never think to begin the search.

So, while literal abundance is not to blame, without the necessary protections, it became a breeding ground for many other problems. It was likely the root cause for why many better patterns were lost amidst the noise before it all turned to silence.

1

The first PLoPD[PLoPD95] book came about as a product of the Pattern Languages of Program Design conference. Subsequent volumes were published off the back of later conferences and submissions. The last book, though published after a gap of seven years, was still a collection of patterns from the very same conference series.

Loss of meaning

The most subtle yet deleterious failure was the deviation of meaning to that of an idiom. Patterns are very different from idioms. They would not have fascinated Christopher Alexander had they carried no more value than idioms do. However, given how many design pattern authors have presented idioms as patterns, the distinction between the two forms may not be sufficiently clear. To help, I shall go the extra mile and let the cat out of the bag by explaining what an idiom is, why it’s different from a pattern, and what unique values it brings.

Idioms are natural expressions of an intent by those fully embedded in the medium. For native speakers of a language, certain words, in certain contexts, gain a new meaning. We can differentiate idioms from other word forms by their unique nature. For example, I neither released any feline from a container nor travelled any considerable distance when writing the previous paragraph. Those words, in those configurations, mean something different than their literal meaning.

Idioms are agreed upon, whether consciously or unconsciously, by the group using them. An individual is unlikely to spawn an idiom. They are not a ‘found’ thing. They don’t naturally reoccur, because they are an element of the momentary culture. They are the acceptance of an idea rather than a strongly self-selecting thing that forms from the properties of an environment. Because of this, we can question why an idiom exists and who it’s for. These are clues to the idiom’s value to us both now and in the future.

In programming, idioms are the accepted or preferred way to achieve goals in those languages or paradigms. This latter point is why some classify the GoF patterns as idiomatic. They are examples of solutions to repeating problems but for object-oriented design alone.

Programming idioms can communicate an intent through a set of steps. If you think of steps as verbs and the objects upon which the steps are taken as nouns, it’s easier to see the link. Idioms are domain-specific in the way patterns are specific to contexts with particular properties. But there are many domains for idioms to cling to.

  • Idioms of compiled languages
  • Idioms of functional languages
  • Idioms of C++
  • Idioms of programming in your specific team
  • Idioms of academic programming
  • Idioms of high-performance programming

Each of these domains can overlap with the others. There are potential idioms of programming in your academic team of high-performance C++ programmers using a functional style. There are so many domains where idioms might occur that it is implausible to list even a tiny fraction of them. Think of them instead as modes of thought—fields in which you can assert invariants or immediately know what would be considered valuable through that lens. Another way of thinking about idioms is as a form of etiquette. They can be rules, but they are artificial even when beneficial.

An idiom, like a word, extends the language. It’s easy to recognise and comfortable to work with, as long as you’re part of the group that understands the in-joke. Because of this, we prefer to write idiomatic code. Code is more easily seen as correct and inflicts less cognitive load when its appearance matches our expectations.

So, why are these not patterns? After all, patterns are often referred to as extensions of a language. Pattern languages communicate relatively larger ideas through smaller, more commonly understood ideas. But patterns and idioms, for all they share, have some significant differences.

Patterns aren’t always constrained to one location or level of scale. They will change in appearance at a different location or level but can be the same fundamental composition of interacting forces in a context. The context does not have to be limited to a specific level of detail, language, or medium. For example, the physical building pattern 159 Light on two sides of every room creates places with abundant natural light. People can see things clearly in rooms with little glare. We feel good in places like this because we can observe the small movements on people’s faces and, therefore, feel better connected to them. This same pattern plays out when using lighting rigs in film and TV. When we want to engage an audience with a character, we light them with little or no glare so that the slightest emotional details are perceived. It’s an entirely different level of scale, but the pattern forces and the context match.

Patterns are self-starting. If they didn’t exist already, they would naturally occur under a different name when the same forces arose. In comparison, idioms tend to be cultural elements. Sometimes, an idiom is something that came along at the right time to be accepted as the right way. There’s also the aspect of idioms being a culturally enforced right way, not reinforced by environmental feedback the way patterns are.

A pattern is a word in a language because a word is needed to describe what it does and where it fits in the grammar of the whole space. Idioms are more like the word choice, spelling, or sequence of words used to realise a solution for a specific in-group. If you know C++, you may have encountered the erase-remove idiom, an idiom for deleting elements from a vector1. However, the algorithm for removing elements from a vector—by shuffling the survivors down and only resizing the vector itself when all removals are complete—is a pattern. The solution to the problem of erasing elements from a vector is a self-starting pattern reoccurring in different languages and paradigms. How the pattern of vector element removal is implemented in C++ with the STL is a two-word sequence: the erase-remove idiom.
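
To make the distinction concrete, here is a small sketch of both forms; the idiom is the STL spelling, and the hand-written loop beneath it is the underlying pattern it names:

#include <algorithm>
#include <cstddef>
#include <vector>

void remove_sevens_idiomatically(std::vector<int>& v) {
    // The two-word idiom; since C++20, std::erase(v, 7) wraps the same thing.
    v.erase(std::remove(v.begin(), v.end(), 7), v.end());
}

void remove_sevens_by_hand(std::vector<int>& v) {
    std::size_t write = 0;
    for (std::size_t read = 0; read < v.size(); ++read) {
        if (v[read] != 7) {
            v[write++] = v[read];       // shuffle the survivors down
        }
    }
    v.resize(write);                    // one resize once all removals are complete
}

int main() {
    std::vector<int> a{7, 1, 7, 2, 7, 3};
    std::vector<int> b = a;
    remove_sevens_idiomatically(a);
    remove_sevens_by_hand(b);
    return a == b ? 0 : 1;              // both leave {1, 2, 3}
}

The loop is what you might write in C, BASIC, or as a functional filter; the erase-remove spelling is merely how this particular in-group says it.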

For another example, we can consider astrology. No, really. Keep reading, please. At the root of astrology is the recognition of shapes in the stars we see in the sky. I’m not particularly interested in which sign is rising or where the moon was when I was born, but I am interested in the concept of constellations. It’s not something unique to any one particular set of people. Constellations are a pattern of human objectification of the stars. However, the specific stars and the formations we named are idiomatic. Each isolated culture had a distinguishable set of relevant clusters of stars. Some overlapped, but many didn’t. Constellations are a pattern of perception, but each zodiac variant is an idiom.

Because idioms are deemed valid by social contract, they are unlikely to persist as long as patterns do. In contrast, a pattern is considered authentic only by judging its intrinsic virtues, such as whether it’s autopoietic2. The nature of idioms often leads to their destruction, as the influence of new language users overrides tradition.

Now you can understand why, despite some brilliant people claiming that all the GoF design patterns are idioms, they are wrong. Some patterns are entirely self-starting from requirements in the domain of object-oriented development. If you attempt to find solutions to design problems and limit yourself to objects, you will, in most cases, arrive at some of the same patterns as those recorded in the GoF book. Like the bow and arrow, they are patterns forming of their own accord, without requiring outside influence to rouse them. When you search for a way to decouple behaviour from its trigger while constraining yourself to using objects, or try to recreate function pointers in object-oriented design, the natural solution is to create an object representing a function call, which is the Strategy pattern in a nutshell.

But idioms can also be fantastically useful. Gathering them and presenting them to those new to the language can hugely impact their ability to comprehend and contribute. Without idioms, you must figure out what each set of words means each time you encounter them or invent a new sequence satisfying your meaning. But idioms are local, in time and space and purpose. Having so many idioms in the catalogue of patterns made it very hard for anyone to understand what a pattern should be. This led to the further dilution from idiom to technique.

As the publications on patterns continued, new areas opened up for pattern coverage. As the breadth increased, the understanding of what patterns are became idiomatic. We started seeing design patterns categorised as any recognisable recurring form in a design. In user interface (UI) and user interaction design, the meaning drifted further until we witnessed UX design patterns, which are all idioms in a pronounced way, as they depend on history and UI literacy. If they were patterns, someone new to computers would not have to ask what the three seashells are for, I mean, what the three-line hamburger button means.

When you look at recently released design pattern books, they are often just lists of techniques. Some examples I have on my bookshelf include the MapReduce Design Patterns[MapReduce12] book from 2012, React Design Patterns and Best Practices[React17] from 2017, and Machine Learning Design Patterns[MLDP20] from 2020. A few patterns in each book might be patterns, but the books are dominated by techniques and solutions. With that, it’s clear we have liberated the term ‘design pattern’ so it can be used to describe any book as long as it has a collection of handy tools for solving problems in a domain. Well, that’s unfortunate.

1

For non-C++ folk, a std::vector is a dynamically sized array.

2

The book Autopoiesis and Cognition[AaC72] was a mind-changing read for me. Autopoiesis means self-creating and self-maintaining. Patterns are both.

Unfindable

When hunting down a pattern to use, you will not locate a well-defined or widely recognised index to them all. Many are poorly named or hidden inside another language behind a further layer of contexts.

Naming patterns is tricky. Some of them have names that seem utterly unrelated to their purpose1. However, knowing which one you are looking for is difficult, even when the name perfectly captures its essence. Because we named software design patterns after their solution form, the names separate patterns united by a common cause and group together those solving only tenuously connected problems. This is how most software patterns have been catalogued, so the problem surfaces in most examples. It’s even true for a handful of the patterns in A Pattern Language[APL77]. It’s a real shame we misunderstood this aspect of cataloguing.

To understand what it means when I claim design patterns are typically presented using the wrong aspect, it’s worth considering a made-up pattern related to real-life architecture and buildings. I will show how something—when described as a pattern from a solution rather than a problem perspective—becomes an unwieldy mess. I aim to align with the style of the most famous software design patterns and show how it misses the mark. For this exercise, I will take an elementary example—a pattern for passing through a wall.

Who would say the pattern of a door was not a design pattern? It’s a structured solution to a recurring problem, used repeatedly. So surely it is a pattern, right?

Fake pattern: Doorway

You very often find the rooms of a house, or even the house itself, quite difficult to use if there are no entrances. People have been known to use windows to gain entry, but those people are often not the intended users of the house at all. For the people who own the house, a mechanism by which they may enter without causing damage to the walls would benefit them and the house alike.

This way of passing through a wall is called a Doorway. It takes the concept of a wall but makes it an active and separate entity that provides safe passage without damaging the wall into which it is embedded. This dynamic wall (from hereon called the Doorway) is a compound solution made of two parts, the frame and the door, and moves on hinges or runners. In some circumstances, you may not even need the Door element of the Doorway and simply use the frame to create a portal between rooms. The pattern of Doorway is not limited to static buildings. We see uses for the Doorway pattern on cars and trains, as well as on less solidly built structures such as tents and fields for cattle, despite those walls being less expensive to repair.

Some Doorway implementations will lead to the outside and the public world, while others will bridge the gap between internal rooms. The Door element in a Doorway to the outside may need to be:

  • Resistant to weather
  • Resistant to attack
  • Secured against prying eyes
  • Resilient to accidental damage incurred by collisions with other vehicles
  • Low in wind resistance when used to separate open spaces such as fields

The level of weather resistance must accord with whether the Door needs to survive the environment or whether the Doorway should keep the weather from passing through and into the structure’s interior.

The material for a Door depends on its final use, so be sure to determine the strength, weight, fireproofing, and waterproofing requirements before settling on a design. Any Door on a boat or submarine has critical constraints in this regard. Other necessary material properties, such as transparency or flexibility, may also apply. An inflexible Door to a field or garden may become brittle or warped and unsuitable for use because it no longer locks effectively. The Door on a greenhouse or a shower should be transparent in most cases. A Door leading to a beautiful outdoor place, such as a garden, should at least be partially made of transparent material; however, you may wish to choose a soundproof material if your children are loud.

Rooms have a range of privacy requirements, and some must be secure, so introduce a locking mechanism. However, the lock may need to be disabled from one side all the time, such as when it’s an escape route. Frequently, the lock need not be keyed; a simple bolt or latch will suffice. The level of complexity in a lock usually indicates the necessary level of privacy, which often correlates with the strength and opacity of the materials used. However, there are some tiny, smelly rooms where a simple bolt lock or latch is sufficient, but they don’t suit Doors made of glass.

Mixed levels of security within a single Door exist. For example, you might like some things to pass through the Door without requiring permission, while others need the owner to unlock the Door. Examples include your mail or postal deliveries, which use letterboxes, or cats, which often don’t. You may also want to consider that even though you might not want someone to see through a Door, you may wish for light or air to pass through, so you can introduce a little window above the Doorway or just have the Door not sit snugly inside the Doorway, such as in restroom cubicles. Fitting letterboxes or cat-flaps to restroom Doors is not recommended.

In conclusion, the pattern of Doorway invites us to introduce a permanent hole into a wall when we seek a way to get from one room to another. We either frame it in such a way that you can tell it was not a construction accident or include a Door with or without a lock, made of a material appropriate for the level of privacy and security.

Hopefully, you at least found this funny, but do you see how it’s a description of doorways, and yet it’s not a pattern? Poor patterns often read like descriptions of the obvious. However, they carry lots of extra details to give them fullness. Here, we talk about doors and their many incarnations, but we don’t align motivation with resolution or consider the consequences for people’s lives. We never got any insight into why and when you would choose each type of variation.

When you look at the original work by Christopher Alexander, you won’t find the pattern of a door. Instead, you will find many patterns that include a door or a doorway as a required element for completion. The door may be described in those patterns, but it’s never the centre of the pattern. Even in the patterns 224 LOW DOORWAY and 237 SOLID DOORS WITH GLASS, it’s the lowness or light transfer that is essential, not the door. Otherwise, there would only be one pattern, indicating only one problem.

Some characteristics may be brought to attention, and some warnings may be laid out about decisions you might make, but there is no pattern of doors in general. Doors are components of patterns. They are the materials or elements of a configuration. Patterns are definitely not just better names for features; if they name anything, they name configurations.

All these details in the Doorway pattern add up to too much to look at. No wonder people can’t figure out what to do with patterns that include these highly detailed descriptions. When building a door for a car, why would you even consider a cat flap? Why would you wonder about whether it needs to be fireproof? There’s too much going on in this description, and yet, it’s also too terse. It’s too much and not enough at the same time. This is because Doorway isn’t authored in a way that benefits people.

Patterns are the identifiable steps you apply to solve the problem of how to build a thing, not the thing itself. This has been a constant problem. What they are is distinct from what you might look for. You look for an unknown but specific solution to a real problem you have but cannot fully describe. You want solutions to complications arising from situations, including information about what else to measure, verify, and evaluate. That’s where the benefit lies in pattern languages. They link together many patterns and expose problems previously unknown to their readers.

Looking for things by pattern names was always going to require more luck and effort than looking for them by context. It’s why we should never have based pattern names on their solutions. If we had identified them by the pain caused by the problem or easily perceivable elements of the problem, I might not have had a story to tell.

When you set out your patterns like the Doorway pattern, with all the possibilities, you end up suffering the same problem you have in overly generic code. Because you don’t start from any basic assumptions, there are too many contexts to work out what might count as a nearby, related problem. In software, this is especially true: things are so similar that almost every pattern appears related to every other pattern until you start whittling down the number of contexts.

Consider a pattern of a stack. If you try to handle all the options, you will overwhelm the person trying to define it. You might have one particular stack in mind, so here are a few variants to suggest the breadth of content you would face if you attempted to describe them all in the manner of the Doorway pattern.

  • An array and a head variable
  • C++ stack container based on the std::deque class
  • A linked list-based stack of objects
  • A stack with varying sized elements, such as is used by the runtime in C and other stack-based languages
  • An auxiliary structure to provide a stack-like interface to pre-existing objects

Documentation for a simple problem like this will explode in complexity for the reader if we fail to differentiate between the kind of stack used at the low level and the kind used at the language level. A lack of context or too many contexts diminishes the value of design patterns.
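
To see how little these contexts share, here is a minimal sketch of just the first and third stacks from the list above (illustrative only, with no bounds checking or error handling):

#include <cstddef>

// An array and a head variable.
struct ArrayStack {
    int items[64];
    std::size_t head = 0;
    void push(int v) { items[head++] = v; }
    int pop() { return items[--head]; }
};

// A linked-list-based stack of objects.
struct Node { int value; Node* next; };
struct ListStack {
    Node* head = nullptr;
    void push(int v) { head = new Node{v, head}; }
    int pop() {
        Node* n = head;
        head = n->next;
        int v = n->value;
        delete n;
        return v;
    }
};

int main() {
    ArrayStack a; a.push(1); a.push(2);
    ListStack  l; l.push(1); l.push(2);
    return a.pop() == l.pop() ? 0 : 1;  // both yield 2
}

They answer the same question, yet a single write-up covering both, plus the runtime stack and the adapter variants, would bury whatever wisdom each specific context holds.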

It’s easy to accidentally start adding corner cases, as options and flexibility are like catnip to programmers. Many otherwise compelling books on design patterns have made this mistake. Indeed, some solutions have many different possible uses. But because of this, when someone begins to document them as a pattern, they strive to find a generic form. Then, the resultant treatment fails to capture the value of each unique situation.

As a takeaway, think about patterns from their problem context alone, not from the point of view of what you should call a pattern solution or what other patterns are similar or related in how we categorise them. Naming patterns is only a worthwhile practice for conversations about pattern selection for a larger design. It would be preferable to name them by context alone, but those names are lengthy and complicated.

If we take the problem as the starting point, things improve. For most GoF book patterns, the problem was often related to coupling or lack of variability. That’s only two apparent patterns. The second is dealt with by nominalisation—the extraction of variability into an object. We could rewrite the GoF book into just three or four patterns, with many examples of how to use those patterns to provide solutions in different contexts.
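
As a rough illustration of nominalisation (my own toy example; the Greeting, Formal, and Casual names are invented for this sketch), the varying branch of a conditional is extracted into an object the caller supplies:

#include <string>

// Before: the variability is trapped inside a conditional.
std::string greet_before(bool formal, const std::string& name) {
    if (formal) return "Dear " + name;
    return "Hi " + name;
}

// After: the varying part is nominalised into an object.
struct Greeting {
    virtual ~Greeting() = default;
    virtual std::string greet(const std::string& name) const = 0;
};
struct Formal : Greeting {
    std::string greet(const std::string& name) const override { return "Dear " + name; }
};
struct Casual : Greeting {
    std::string greet(const std::string& name) const override { return "Hi " + name; }
};

std::string greet_after(const Greeting& how, const std::string& name) {
    return how.greet(name);             // new variations no longer require editing this code
}

int main() {
    Formal formal;
    return greet_after(formal, "Ada") == greet_before(true, "Ada") ? 0 : 1;
}

New variations now arrive as new objects rather than as edits to every site that branches on a flag.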

This naming fault even exists in the works of Christopher Alexander. Some patterns in the original pattern book, A Pattern Language, have charming names describing what they are about and what they reinforce or create. 159 Light on two sides of every room is wholesome and reinforces the importance of diffuse light without ever literally demanding windows. But these names result in a quandary for those hunting down solutions to their problems. You can’t find them by these names. They’re lovely patterns, but you have to read them to understand their value. And you often only know their relevance after you have read them. Who would think that 159 Light on two sides of every room had any application to lighting an emotional scene in a film?

Context is a better basis for finding patterns when considering how we look for things when needed. You may wish to look for a real solution to a concrete problem you have right now. In this case, you can only use patterns if you already understand them or understand enough of them to know where to look. Finding the fix for your problem can be taxing if you’re unaware of the pattern that fits. What you have is information about the problem rather than the solution.

As a counterexample, when it comes to sorting algorithms, you know when you need one and where to look for it. You realise you need to regularly find something in your container based on some ordering of the set, or the items must be sorted for processing by a method outside your control. You are aware of your problem, and you know the type of solution to look for. You can do a web search for sorting algorithms, learn about the big O notation if you haven’t already, and then select an appropriate solution based on your expected data. You can either happily implement it yourself or use a library to solve your problem without doubting that the solution suits the job.

Sorting is a pattern, but we know it. We know the name but might not know the correct solution form, so we look up the pattern information (big O, time/space trade-offs, stability) and select a solution resolving our specific forces in the context of needing to sort.
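
In C++, for instance, the selection often comes down to a single force such as stability; the Entry and rank names below are mine:

#include <algorithm>
#include <string>
#include <vector>

struct Entry { std::string name; int score; };

void rank(std::vector<Entry>& entries) {
    // Stable: entries with equal scores keep their original relative order.
    std::stable_sort(entries.begin(), entries.end(),
                     [](const Entry& a, const Entry& b) { return a.score > b.score; });
    // If that force were absent, std::sort would do, usually with less memory.
}

int main() {
    std::vector<Entry> league{{"asha", 3}, {"brett", 5}, {"cory", 3}};
    rank(league);
    return league.front().score;        // 5; asha still precedes cory
}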

Sorting is one of the many foundational subjects of programming literacy. When I was a child, I worked on a program on a ZX Spectrum in BASIC, and not knowing about route-planning algorithms caused me to reach an impasse with my little game project. Lacking even the knowledge of what I was looking for stopped me from finding a solution. I didn’t know I needed a path-finding algorithm to solve my problem. I only knew I didn’t know which way the enemy robots should step to move towards me when I added obstacles to the world. Patterns are analogous to this. When you cannot clearly articulate your problem, finding the pattern to solve it is virtually impossible. We teach a broad range of algorithms in programming courses so you know the categories of problems you could face. You use them as a lexicon in your search for solutions.

Another time we don’t need to know the pattern names is when someone else does. Names worked out fine for Christopher Alexander because the way they worked included a phase in which they built up a smaller language to match the project they were working on. His team found the necessary patterns and presented just these few to the people involved in the process, just like Kent Beck and Ward Cunningham did for those novice developers in 1987[UPL87]. So, we need to add knowledge of patterns at an architectural and leadership level. The architect or lead can use patterns to help guide others to a better solution. However, this isn’t possible because of a problem: unavailability.

1

e.g. The Caterpillar’s Fate. What, you can’t immediately guess this is a pattern for the transformation process from analysis to design?

Unavailable

Design patterns are only available to some people. Yes, the GoF book[GoF94] is available to buy, and online videos cover most of their patterns in detail from multiple sources. But the greater scope of design patterns is not available. I need to explain a few things about availability, and I will start by explaining what it means to be unavailable for an individual. Here are some reasons why a pattern, even if you know the name, might not be available to you specifically:

  • You cannot afford to buy the book in which it is found
  • You are not fluent in the language the pattern is written in
  • You learn best by video, but those available lack accuracy or credibility
  • You can’t buy the book because it is out of print
  • The library withdrew the book for lack of interest, so borrowing it is impossible

If you don’t know the name of the pattern, it will be unavailable to you because:

  • You’re unaware of other design pattern books, so you don’t know to look further
  • You’re unaware of the relevant book, so you can’t discover the alternative pattern
  • You cannot search for them anywhere to find an appropriate pattern for your unusual or niche problem

A source of information is unavailable to an individual when any of these reasons apply. You might want to learn, but the time investment in finding a good video series is daunting compared to a cheap book covering the essentials.

When researching for this book, I found Christopher Alexander’s works relatively easy to acquire. Most are still in print, and those that are not were not too expensive to buy second-hand, except for one book[AFo21CA93] on Turkish rugs1. But when it comes to books on software design patterns, many were out of print, and only a few of the longest-lived titles were available.

Being out of print and unavailable in libraries is an underlying problem for the software design patterns movement. The lack of interest in the extended pattern collection reduced the print runs of books on the subject. The success of the GoF book perpetuates the printing of its first edition. This lopsided success causes us to see search results such as ‘there are 23 design patterns’ and receive hundreds of hits for videos explaining them as the design patterns of software development.

With so few people knowing how much information is out there and the information being so difficult to get hold of and varied in quality, you can understand why people don’t use these books as a reference more often.

I believe we need a place that contains many more of the patterns we know about. It needs to help people find patterns matching their contexts. I would suggest something dynamic so that we can maintain the patterns, as they need to be safe to use, and safety is relative. But each pattern needs to be organised by the problem a person has and where they have it, not stored according to the funny name for a clever solution some bright spark came up with one day and wanted to share with the world.

1

No, seriously, it really is about rugs. And yes, it’s definitely worth reading if you can find a copy.

Unidentifiable

For patterns, there’s another twist to the problem with their findability. It’s not a real word, but I didn’t have to explain it to you for you to know it meant the same as discoverability. Well, that’s one of the problems with software design patterns. Even when you read or hear a pattern name, you cannot be sure you know what it pertains to. How can someone immediately deduce the meaning of even some of the best-named patterns, such as Factory Method or Template Method? These aren’t bad patterns because of their names, but the names are not helping. However, many patterns have even worse names. Few people understand the Interpreter pattern because it shares a name with another concept in computing.

Furthermore, references collide. All three variants of the Interpreter pattern sit under one name. Both interpretations of Adapter (class and object) are hidden under one pattern. Why?

The ambiguity we bring with overlapping terms is a significant problem with patterns, even when you know they exist. When you know a pattern could help you make decisions, it’s almost impossible to guess what it might be called if you don’t already know the name. The accepted categories of software patterns don’t improve things. I applaud the authors of Pattern-Oriented Software Architecture[POSA96] for their decision to categorise by the level of abstraction and the class of problem each pattern attempts to solve. The pattern hunter only needs to sift through a few patterns to see if there’s something applicable from those available at the architectural altitude they are designing for. But some offerings merely present a long list of patterns categorised by usage rather than by where and what they solve. These lists are not helpful, but it’s even worse in the case of a catalogue listing them by name or author. It’s as if there are no categories at all. It’s similar to a dictionary sorted by word length first (hmm, quite advantageous if you are playing a word game, I suppose, but not the right form for finding definitions) or by the order first seen in use. Actually, that last one sounds engaging, like a history of language one word at a time, but admittedly, again, not a convenient reference.

So, we end up with an extensive collection of patterns that could solve our problems if only we knew about them before we needed them. We need to know their names and roughly what they help with before we can properly look for them. It’s like we need to be vaccinated against the problems by reading the pattern books before we begin writing any code. I needed to know that route-planning algorithms existed before I could use one in my childhood game project. But there’s another problem when seeking solutions: problem visibility.

When it comes to sorting algorithms (again), you know whether or not you need to sort something. You can quickly tell you have a problem when the data arrives in one order but you would prefer it in another. When it comes to the problems design patterns usually help solve, sometimes the problems are invisible when they first appear. The vaccination comparison is even more pertinent in this case. A pattern often solves a problem lacking visibility until very late in development. In this case, we would never seek out the pattern. Avid pattern readers would only stumble upon it by accident, but that would be serendipity, not procedure. And even then, it would only be fully absorbed, and later recalled when its specific situation arose, if it had piqued their interest. Consider how often people think to use the Flyweight pattern compared to how often it would be a useful refactoring. You frequently see complicated value objects in domain-driven design, and yet the Flyweight pattern is somehow overlooked as a solution because the problem isn’t made visible by how developers work.

Names inhibit the reach of a good pattern. When we label things, we constrain them by what we can conceive. We assume their title describes all they can solve, and that limits our imagination. In C++, the Iterator pattern is locked into the naming of the STL. When you get an iterator from a container, it is assumed you will use it to iterate over the container. This is fine if you are using the iterator for iteration, but the name leads many developers to ignore other uses of iterators in the language. In the case of vector, an iterator denotes a position in a contiguous piece of memory, so you can use it for much more. Iterators can also be the result of find operations, a usage reinforced by map. Generally, however, they are overlooked for other purposes because they carry this name, and the common attribute is that of iteration. If they were not named thus, and instead given unique representations based on what they were, it could open the mind to different opportunities.
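
Here is a small sketch of those other uses; sum_raw stands in for any pointer-based interface and is my invention:

#include <algorithm>
#include <cstddef>
#include <vector>

// A hypothetical C-style interface that only understands pointers.
int sum_raw(const int* data, std::size_t n) {
    int total = 0;
    for (std::size_t i = 0; i < n; ++i) total += data[i];
    return total;
}

int main() {
    std::vector<int> v{3, 1, 4, 1, 5, 9};

    // A position, not a loop: the result of a search.
    auto it = std::find(v.begin(), v.end(), 4);

    // A range, not a loop: sort only the elements before the found position.
    std::sort(v.begin(), it);

    // A piece of memory: vector iterators denote contiguous storage.
    int total = it != v.end() ? sum_raw(&*v.begin(), v.size()) : 0;
    return total == 23 ? 0 : 1;
}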

An example of the failure of patterns can be seen at the microscopic scale with the nodiscard attribute in C++. The nodiscard attribute provides a mechanism to raise a warning when you ignore a return value. It’s a solution for two different problems.

One is the problem of ambiguity in function names as to whether they are actions or queries. When the attribute was introduced, the standard was updated, and calls to empty on containers gained the tag. It protected users from ambiguity of intent. This is a problem we have with naming things. Should a function called reverse reverse the container in place or return a reversed copy of the container? Without looking at the documentation, it’s hard to know. The nodiscard attribute came to the rescue.

The other main reason for nodiscard was the criticality of responding to the result. Again, they added the attribute to standard library functions returning resources requiring management, such as memory allocations and thread creation. In other situations, discarding is dangerous, such as when you must act on the result. If you request a lock on a resource, such as a mutex, you should act differently depending on the outcome of attempting to take that lock.

There’s nothing wrong with having both variants use nodiscard, but the problem that is solved is different. In one case, you called the wrong function. In the other case, you called the correct function but did not handle it sufficiently. The const keyword nearly fulfils the demands of the first usage (it doesn’t for free-functions, but for methods, it almost works), but few compilers even emit a warning when you fail to use the return value of a const method. Without phenomena, the coder will continue to use the method ineffectively.

class A {
  public:
    int f() const {
        return 5;
    }
};

int main(int argc, char* []) {
    A a;
    a.f(); // this is almost certainly wrong.
    return argc * argc;
}

// Even with -Wall -Wextra -Wpedantic
// No warnings emitted by clang 16 or gcc 13.1

// snippet available at https://godbolt.org/z/TbnbvYPb5
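
For contrast, here is a sketch of the same two problem variants with the attribute applied; it assumes a C++17-or-later compiler, and the Queue and allocate_buffer names are mine:

#include <cstddef>
#include <vector>

// Problem one: ambiguity between query and action. empty() is a query.
class Queue {
    std::vector<int> items_;
  public:
    [[nodiscard]] bool empty() const { return items_.empty(); }
};

// Problem two: the result is critical and must be managed or acted upon.
[[nodiscard]] int* allocate_buffer(std::size_t n) {
    return new int[n];           // discarding the pointer leaks the buffer
}

int main() {
    Queue q;
    q.empty();                   // warning: wrong function? perhaps you meant to clear it
    allocate_buffer(4);          // warning: right function, result not handled (and leaked)

    int* buf = allocate_buffer(4);  // correct: the result is taken and managed
    delete[] buf;
    return 0;
}

One attribute, two different problems resolved, and the warning text gives no hint of which one you have.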

Patterns can have similar overlaps. When they are solution-oriented, they can resolve more than one problem. Named this way, they’re hard to find, but even when you do find them, you may only have one problem variant in mind. So, you disregard them as a solution to your problem, even though they’re compatible.

Let’s return to the Interpreter pattern. Most developers think of it as a way to handle scripting languages. This understanding of the pattern stops developers from thinking about it as a solution to the problem of the interpretation of a structure. The way the GoF wrote about it, it’s hard to extract this buried variant unless you read the last part many times. They understood its power to manipulate structures but buried the lead. So, their examples looked like interpreters. Because the primary interpretation reflects a way to visit a structure to create some kind of result, and the result presented is not a compound in all but one example, we hardly ever recognise it as a pattern of structure updates.

When the Interpreter pattern was analysed, its repeated manifestations deserved more attention and should have been better dissected. Too many different contexts and forces were allowed in. There are multiple patterns here, but the way it was documented weakened it because it covered too much. Just like the Doorway pattern.

Unusable

And here we get to the final failing. All software design patterns have a problem caused by the structure of their presentation. They manage neither to educate beginners nor to provide the ready reference of solutions they purport to be.

Beginners in a subject are unlikely to understand the contexts in which the pattern arises. Most patterns as presented spend little time explaining the context and forces because they are solutions first. But the trouble beginners have is often a lack of insight into what their problems are. This gap is the reason why design patterns remain both overused and underused by those new to programming.

A beginner needs hand-holding through specifics, and patterns aren’t about specifics. Someone picking up a design pattern needs counterexamples to know what a pattern does not warrant or require; otherwise, you end up with cargo cults.

But even when you are experienced, and even when you do know what pattern you are looking for, they can be difficult to use well. This is because, again, they are presented in the final form rather than as a process to reach it. Having a solution to review is inspirational rather than educational. Having a problem worked through is a much more powerful lesson. For experienced developers, a solution can be enough. But even for the most seasoned, if they are in a rush, it can be tiring to take on the additional cognitive load of converting it into a plan of action. The book Refactoring to Patterns[RtP04] addresses this specific point; nonetheless, all pattern descriptions should have included the before and after and a sequence of steps.

What were the original software design patterns documented for? One purpose may have been to show elegant final forms. This noble goal was likely the main driver behind the patterns in the GoF book[GoF94]. However, I strongly suspect many other patterns were documented, intentionally or innocently, as a form of self-promotion or plumage of developer prowess. We will see later how this ego-driven aspect is at the core of many pitfalls in design patterns, software and otherwise.

What Are Patterns Really?

Now we get to the deeper theory behind why things went as they did. The basis of many problems with software design patterns is rooted in how so many aren’t patterns at all. Before we attempt to analyse this impact, we need to be well-versed in how things could have been. We need a deeper understanding of what patterns are, what they are not, and how they surpass idioms and techniques.

This chapter will start with the idea of patterns as wisdom. Then, we move on to the form of a pattern—how it reflects its final state and the process by which it accomplishes the task. Patterns are actions, and their actions are always about resolving some forces in a context, so we continue with a section on what this means.

We can then talk about the importance of relationships. Patterns were born from the idea that relationships are the critical aspect of complex design. Without thinking about patterns as part of a larger intertwined configuration, much of their value is lost. We then move on to how these connections explain why they are eternal, not idiomatic or momentary. We conclude by analysing the claim they emerge without being designed with intellect, that is, they are most often found, not invented.

Wisdom, not solutions

If knowledge is knowing how to do something, wisdom is knowing what doesn’t work. Wisdom is experience-based, and the most persuasive experiences are negative feedback. In essence, being wise is learning your lessons and not making the same mistakes again. It doesn’t stop us from making new mistakes, but at least they’re new.

Design patterns are like that. They are often called recurring solutions to problems because they stipulate the invariants of any high-quality solution to a specific problem. Wisdom helps narrow down your options. X is impossible because of Y unless you have Z. These problem-specific constraints help guide someone to a solution. But they are not sufficient to generate a solution, only steer the process. That’s why patterns include examples—as inspiration for what you could do. There are some general motivations for deciding on solutions, such as:

  • Compatibility, Discoverability, Debuggability, Extendibility
  • Observability, Performance, Portability, Reliability
  • Safety, Scalability, Security, Usability

Patterns have warnings in them. They are adaptable in relation to the patterns that regularly neighbour them. They have principles revealed through many trials and many errors.

Initially, patterns emerged from trying to bring back a straightforward approach. Traditional methods were attractive because of their lack of apparent complexity. Their ability to locally adapt to environmental changes was an attractive feature for Christopher Alexander. But, when he analysed traditional methods in Notes on the Synthesis of Form[Notes64], he found situations in which they fell apart.

The Slovakian peasants used to be famous for the shawls they made. These shawls were wonderfully colored and patterned, woven of yarns which had been dipped in homemade dyes. Early in the twentieth century aniline dyes were made available to them. And at once the glory of the shawls was spoiled; they were now no longer delicate and subtle, but crude. This change cannot have come about because the new dyes were somehow inferior. They were as brilliant, and the variety of color was much greater than before. Yet somehow the new shawls turned out vulgar and uninteresting.

— Christopher Alexander, Notes on the Synthesis of Form, p. 53.

Traditional techniques were stable but had no insight. He couldn’t use them as they were. Tradition is wisdom without awareness. In contrast, intellectual design is awareness without wisdom. Awareness-driven decision-making is limited by mental capacity. These two contradictory solution methods contributed to an apparently unsolvable problem for designing complex forms in the modern world.

What Christopher Alexander wanted to do was create a solution to the two problems of modern buildings. He wished to resolve tradition’s inability to keep up with modern techniques and a rapidly changing environment—because tradition, for all its benefits, becomes ineffective when exposed to rapid progress. He also sought to solve the problem of modern architecture’s inability to handle the inherent complexity of building large, complex structures. After a plan from a modern process has been executed, many worrisome stresses and tensions are left unresolved. Alexander needed a way to construct wisdom from thin air.

The outcome of Notes on the Synthesis of Form was the appearance of patterns: a way built upon understanding the categories of mistakes made by the modern process, and a way to reveal new traditions by calculation and intellectual processes. Patterns were discovered wisdom because the mistakes modern architecture was predisposed to making, the things you ought not to do, were almost always about unexpected, negatively consequential relationships between elements, that is, relationships where changes to improve one part degrade another.

Instead of tackling the problem of complexity head-on, he concentrated his efforts on the category of mistakes. The approach centred on architectural elements from the point of view of relationships of change, focusing on changes affecting multiple elements in opposing ways.

The natural way to decompose a system is by what you know, the things you already have names for. In Notes on the Synthesis of Form (p. 127), Christopher Alexander suggested that when diagramming the decompositions, you should ‘resist the temptation’ to use ‘well-known verbal concepts’ for the new change-connected chunks but instead diagram them so that they don’t lose their power under the influence of pre-existing biases. The reasoning comes just before it: each diagram or name must be both complete and unadorned with any accidental requirements caused by ambiguity or prejudice.

By starting from this different position, he found a way to elicit wisdom from the context and the forces at play. Collecting elements by their impact on each other was one part; how they affected and revealed otherwise unseen compounds, made manifest only by the relationships between elements, was another unexpected and important discovery. These implicit compounds were the first sighting of patterns. They sprang forth from the contexts and forces and were the foundation of the new way of thinking about building.

Without a surrounding context upon which to pivot, and without forces to guide its necessity, a pattern will not emerge. There can be no mistakes if there is no will to induce change. There can be no mistakes if there is no context by which to judge the decision. These points are why most true patterns are wisdom. They contain the impact of mistakes such that they will not be made again.

When you think of patterns as solutions to recurring problems, there’s a problem. They aren’t solutions; instead, they are how any viable solution can be safely reached and verified. When well presented, they capture the common structure, behaviour, and control flow of any good and valid solution in a particular domain. They’re not so much about capturing the details of solutions already seen as about revealing all the regrets you could have had if you hadn’t abided by the restrictions and limitations found by those who walked the path before you.

Good pattern definitions should contain restrictions and limitations guided by positive aspects presented as principles of a high-quality final form. They can be lists of things you would be unhappy about if you hadn’t considered them. Not so much how to do it, but more the kind of story that starts, ‘We know what you’re gonna try to do, so here’s what you need to look at’. In effect, patterns must be stories of your future. Someone else has already gone down your route. They are reporting from your possible future about your plans and how they worked out—even though you haven’t yet taken the first step.

And because they are wisdom, patterns are never about cool ideas. They are always the wisdom captured by others who went through similar situations. Tales told about what happens when you live through it. They’re distilled anecdotes. They’re wisdom. They’re warnings.

Again, patterns aren’t solutions. The solution is usually pretty obvious, but it has to be coaxed out of you by asking the right questions. Unfortunately, given the present form, you have to know what the solution entails before finding the ideal pattern to consult. But when you’ve located the pattern, you can use it to better understand your problem and uncover other snags and consequences you hadn’t yet noticed. What else about your solution aren’t you considering? A pattern helps you change your mind. You can be rewarded by learning new constraints, which inspire new opportunities.

Patterns aren’t tools for construction. They’re a way to organise how to use your tools. They’re not reusable techniques for software design as in the verb ‘to design’. They’re reusable wisdom about the systems that spring into life around a solution. They provide insight into maintainability, as well as into what pairings of solution and system might look like and their respective consequences.

Innovative patterns

However, not all design patterns seem like wisdom. They might appear to be inventions at first sight, something you wouldn’t easily come across. These patterns are epiphanies, but they are regularly self-forming. They are the inevitable inventions of people put in a context with certain forces. You might wish to think of them as belonging to a slightly different category because they take an inspired leap. They’re not the patterns of trial and error; they’re inventions with some insight and higher thought.

Invention patterns include the use of fire and the bow and arrow. We invented weaving, rope, knots, and sewing many times over. Mathematics and logic emerged from contemplation alone at many points in space and time. These aren’t quite natural patterns, because they need thought to bring into being, but they keep emerging anyway. We refer to them as patterns because they also resolve forces in contexts, but these are the patterns that are often structured solutions. They include a lot more ‘do this’ than ‘avoid that’.

These patterns can appear in nature. Examples such as the eye with a lens to focus might seem like a leap from the pinhole. We know nature doesn’t think, so it’s clear these clever solutions can emerge without thought.

There are fewer invention-type patterns in Christopher Alexander’s work, but they are present, such as 251 Different Chairs or 229 Duct Space. Their presence may have contributed to misunderstanding the purer form of complexity-dissolving, wisdom-based patterns. The inventive patterns are more immediately attractive to programmer types and lovers of puzzles.

Indeed, in pattern literature, it becomes undeniable the more you read about them outside of software development: design patterns don’t stop you from having to think. Instead, they invite you to think more acutely. They give you the inspiration and context to understand what to think about.

A process and a thing

Christopher Alexander did not consider a building as finished so much as being an attempt, an approximation of an unattainable perfection[TTWoB79]. Every building had an age and a state of repair. The older a building became, the more likely it needed small repairs or an extension to match its new usage. But, for him, even a new but incomplete building was in a state of repair.

Rigid building designs are brittle and not very likely to survive changing times. They don’t allow for adaptation. Most modern buildings are like this. Consider the different physical space and connectivity needs of offices over the last few decades. In contrast, there was a suppleness to all of Christopher Alexander’s designs. He built each building, component, extension, or improvement so that it sat comfortably with what was already there, remaining conscious of the need to support an unseen, greater future form.

The same value in malleability can be said of code. Old code is only good if it can be repaired when it is no longer fit for purpose. Christopher Alexander wrote[NoO1-01] negatively about buildings with no scope for extension or capacity to adapt. Compositions that can only be put back into their final state were not living but were arrangements brought into existence by a dead process.

… when you build a thing you cannot merely build that thing in isolation, but must also repair the world around it, and within it, so that the larger world at that one place becomes more coherent, and more whole; and the thing which you make takes its place in the web of nature, as you make it.

— Christopher Alexander, A Pattern Language[APL77], p. xiii.

If code cannot be changed, it’s not good code. If you can only make it work by shoring it up so it does not need to change while everything else works around its failings, then it’s still not good code. More to the point, it was never good code.

Patterns are about solving a problem, but they also contain the process of achieving a solution. As with wisdom, we see a reason behind this. Complex things built through traditional methods were always constructed piece-wise. No successful, large, complex thing was ever made in one step. My feelings resonated with the words of John Gall when I read:

A COMPLEX SYSTEM THAT WORKS IS INVARIABLY FOUND TO HAVE EVOLVED FROM A SIMPLE SYSTEM THAT WORKED.

The parallel proposition also appears to be true:

A COMPLEX SYSTEM DESIGNED FROM SCRATCH NEVER WORKS AND CANNOT BE PATCHED UP TO MAKE IT WORK. YOU HAVE TO START OVER, BEGINNING WITH A WORKING SIMPLE SYSTEM.

— John Gall, The Systems Bible[SysBible02], p. 63.

Complex things at scale are only successfully made with traditional methods when built piece by piece. Because steps are required to build well, patterns must also be processes. They are the wise steps to take to achieve a new, better state, with each state being a distinct solution to an expanding problem.

The thing aspect of this has to do with an essential completeness of all patterns. A whole thing in one notable entity means the pattern describes a complete system of elements supporting each other. The system can be tiny, but it has to be complete and self-supporting. As an example, by negation, we can take an element away. The pattern of 105 South facing outdoors from A Pattern Language[APL77] is a good-quality pattern that reserves the best of the sun’s light for the outdoor space of a building plot by giving that space the south (or, in the southern hemisphere, the north) of the land. This pattern consists of only the land and the building. Remove the building from the pattern and it has no meaning. Add neighbouring buildings and it gains additional but worthless complications. The point is to make sitting outside by the focal building a positive experience. Sitting on a plot by a neighbour’s wall is not relevant to the goodness and wholeness of the construction.

Wholeness means a whole thing in one entity and repairing it to a wholesome state. At each step, the entire thing is good—or at least not worse than before. The south-facing wall has a usable seat, for example. And when we see something as incomplete, it’s not healthy. It needs repair. If we plan to add a shape to the building but it would introduce a shadow to the outdoors, we can recognise that and adjust the plan to fix it before it happens, just as tests tell us our proposed change is not good.

Many things will be unstable when we work on a project, but we can decide to work on those tasks that close the most significant gaps. We can look at the needs of the whole project and tackle the highest-impact decisions first. This way, we start with nothing and repair the project into something we are happy with. At each stage, we make it better, more alive, more whole.

Even though design patterns are processes, they regularly explain themselves by describing an outcome. The manipulations we need to apply to get to that outcome from our existing context would be much more helpful documentation. We can look at how a building plot turns into a building over time with design patterns. Residents change. Their needs change as they go through stages of life, both temporary and enduring. The building will no longer be a perfect fit for their changed needs. New patterns will become appropriate as new forces appear, so the building should be repaired into a new state. In software, we see this with changes caused by revelation or modifications required to meet new expectations of users becoming accustomed to features available in other, related software.

At the detail level, we may migrate to using a Strategy where we previously had a direct function call. A recent additional requirement, the need to configure which function we call, would be a new tension needing repair. Perhaps a switch isn’t powerful, extensible, or safe enough for our latest use case. We need a variable to store the action and a way to inspect or serialise it, so a simple functor will not do.

The context for the Strategy example is a behaviour invoked at a certain point in time. The forces are our need to adjust the behaviour at runtime. So, we observe a new variance. In some languages, such as Python, support for this is built in. But when using C++ as an object-oriented language, we need to use something to let us bind to a function at runtime. To do that, we can take the step of nominalisation: converting a verb to a noun. This step is the process part of the process and a thing.

The nominalisation step is the transformation—the change we wish to make to the environment to fix it. It is a process of repair.
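
As a minimal sketch of what that nominalisation step might look like, consider the following Python, in which the verb ‘pay’ becomes a noun we can store, swap, and inspect at runtime. The class and method names are purely illustrative assumptions, not taken from any particular codebase.

class PaymentStrategy:
    # The nominalised verb: 'pay' has become a thing we can hold in a variable.
    def execute(self, amount):
        raise NotImplementedError

class CardPayment(PaymentStrategy):
    def execute(self, amount):
        print(f"Charging {amount:.2f} to the card")

class InvoicePayment(PaymentStrategy):
    def execute(self, amount):
        print(f"Raising an invoice for {amount:.2f}")

class Checkout:
    def __init__(self, strategy):
        self.strategy = strategy

    def configure(self, new_strategy):
        # The repair step: swap the behaviour without editing Checkout itself.
        self.strategy = new_strategy

    def complete(self, amount):
        self.strategy.execute(amount)

checkout = Checkout(CardPayment())
checkout.complete(42.0)
checkout.configure(InvoicePayment())  # decided at runtime, no edits to Checkout
checkout.complete(42.0)

The inheritance hierarchy is only one way to express it; as the snippets later in this section show, the same repair can be made with plain callables once the language supports storing them.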

Taking the plot of land for a house in the northern hemisphere and splitting it such that the house is at the north end so the south-facing wall can have a semi-private, well-lit open area around it is also a process of repair to the current design. It is the right step to take:

  • for the purpose of the land
  • in the context of benefiting from good light when outside the property but still on the land beside the property
  • to use the land well, to its fullest
  • for a northern hemisphere property

Given context and forces, nominalising a function into an object can be the right step. The Strategy pattern describes the process to follow when, under certain constraints, you want the capacity to adjust which function is called without having to go into the existing code again.

The Strategy pattern is the application of a sequence of steps to get to a solution for a problem in a context. It’s why we can still use it in languages like JavaScript, where you can put a function in a variable or override any class methods whenever you feel like it. All prototype languages can use the Strategy pattern because it’s not just the final object-oriented form but an idea of allowing for dynamic behaviour based on state. The concept was limited to object-oriented programming because of the aims of the book in which we find it. The pattern was defined as a way to store functions in objects but, at the core, it’s just runtime dynamic behaviour.

Because the code the authors used at the time did not have the sophisticated mechanisms available in modern languages, they did not recognise the pattern for what it was. In effect, the Strategy pattern could be interpreted as Last Minute Decision Making. It allows a function to respond to an event, but the responding function can be decided upon at runtime. It’s a form of dispatch mechanism, so it comes with all the related downsides. The problem with not understanding the problem or process driving the pattern is that when new solutions come along, you don’t replace the old, inferior solutions in your pattern definitions.

Some people use Strategy or policy objects in languages where they don’t need to, just because they are now the solution accepted by the culture. They have drifted into being an idiom. If you need to call a different method based on some configuration in Python, you can do this:

class MyClass:
    #...
    def configure(self, new_strategy):
        # Store any callable; a plain function, lambda, or bound method will do.
        self.do_it = new_strategy

    def do_thing(self):
        # Invoke whichever behaviour is currently configured.
        self.do_it(self.context_a, self.context_b)

And if you are using C++11 or beyond, you can do this,

#include <functional>  // for std::function

class MyClass {
    public:
        using Func=std::function<void(const TypeA&, const TypeB&)>;

        void configure(Func new_strategy) {
            // Store any callable: lambdas, functors, and free functions all fit.
            m_do_it = new_strategy;
        }
        void do_thing() const {
            // Invoke whichever behaviour is currently configured.
            m_do_it(m_contextA, m_contextB);
        }
    private:
        Func m_do_it;
        TypeA m_contextA;  // TypeA and TypeB are assumed to be defined elsewhere
        TypeB m_contextB;
};
Snippet link: https://godbolt.org/z/rd41o1ssT

And even if you are using a language without these features, there are still workable solutions. You can achieve it in C, which almost proves this was never an object-oriented pattern in the first place.

struct MyStruct {
    void(*do_it)(int, int);  /* the configurable behaviour */
    int m_contextA;
    int m_contextB;
};
void configure(struct MyStruct *ms, void(*new_strategy)(int, int)) {
    ms->do_it = new_strategy;
}
void do_thing(struct MyStruct *ms) {
    /* Invoke whichever behaviour is currently configured. */
    ms->do_it(ms->m_contextA, ms->m_contextB);
}
Snippet link: https://godbolt.org/z/MfMoxTh5T

So, the essence of the pattern (introduce a family of algorithms that can be selected from at runtime) is maintained in these examples, even though each of these implementations is utterly different from anything in the GoF book.

Generative, not descriptive

In the whole body of Christopher Alexander’s work, the slide towards generative processes is gradual but unmistakable. In The Nature of Order, Book 2[NoO2-02], the generative process takes centre stage, binding to the core argument from the first book with great strength. However, even in his early works, his theories seem based on an epiphany over the difference between traditional methods and those of modern architecture.

The architectural theories at the time were descriptive, not prescriptive[Grabow83]. There were very few rules on how to design a building. Mostly, they were rules on how to review a building once constructed. Buildings tended to be similar to those around them out of actively planned consistency rather than because they followed a generative process. The prescriptive rules were all in the domain of engineering. Only the engineers had rules for weights, ratios, and material limits to drive their decisions on what and how to build.

At the beginning of his work, Christopher Alexander suspected this was the cause of the problems in modern architecture; hence, he looked to traditional building processes to find a way to extract this generative nature. However, it would seem Peter Eisenman found the descriptive architecture thesis quite alarming. Eisenman became an internationally acclaimed architect later in life, known for projects such as the Memorial to the Murdered Jews of Europe and the Wexner Center for the Arts. His style is most certainly what one would call modern architecture.

Eisenman countered Christopher Alexander’s work with his theory of modern architecture[Formal63] in 1963, even before Notes on the Synthesis of Form[Notes64] was officially released. When understood, Eisenman’s thesis pushes back against Christopher Alexander’s unspoken complaint that modern architecture was not a generative process and thus was devoid of humanity.

However, even though Eisenman’s thesis argues against Alexander’s implicit conclusion, making it clear Eisenman holds that modern architecture is, in fact, a prescriptive and generative process, it does not quite hit the nail squarely on the head. He side-steps the question of humanity completely, preferring to generate forms with which to achieve artistic meaning, or even no meaning at all.

You can see evidence of Eisenman’s values in his later works. There are often clear signs of a desire to design for the sake of the piece’s meaning. Meaning drives form over value to the people who would use or inhabit his buildings. I see parallels with the architect Le Corbusier, with his artistically advanced but practically unsound Villa Savoye and the thankfully unrealised Ville Radieuse (Radiant City). Eisenman’s split bedroom1 in House VI is one example of thrusting his intent onto the inhabitants of the building. Another issue with that house is the strange staircase lacking any handrail. As someone who enjoys good design, I am reminded of a quote by Dieter Rams (the famous designer of many Braun products):

Indifference towards people and the reality in which they live is actually the one and only cardinal sin in design

— Dieter Rams2

The problem behind both (Alexander’s and Eisenman’s) arguments is not whether modern architecture was generative but, more critically, what was generating it. This is an important distinction we will go into later. For now, we can take it that both aspects were true. Architects acted as Christopher Alexander proclaimed, actively designing without using a generative process tuned for human values. But Peter Eisenman’s thesis was valid in claiming modern architecture used a generative process.

What interests us is how we ended up with descriptive, not generative, software design patterns. They often describe what was achieved, not how to decide what to do or how to get there. When you want to implement a sorting algorithm, you use the instructions for implementing a specific one. That’s a description of an existing solution; a solution you take and apply to your language. A generative process wouldn’t have a particular solution in mind but would give you the steps to reach a valid resolution of your situation. It would have metrics or rules to verify the solution should hold under routine use.

If someone explains how to write a run-length encoder, that would be a descriptive pattern. It describes a solution you can use in quite a few cases but not how to invent a compression algorithm. But if someone explains information theory to you with examples, showing you how to look for the deeper information in your data and deciding how to encode it, that’s a generative process.
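
To make the distinction concrete, here is a minimal sketch of the descriptive side: a run-length encoder in Python. The function name and the pair-based output format are assumptions made for illustration; the point is that this records one finished solution rather than teaching you how to derive a compressor from your data.

def run_length_encode(data):
    # Collapse runs of repeated symbols into (symbol, count) pairs.
    encoded = []
    for symbol in data:
        if encoded and encoded[-1][0] == symbol:
            encoded[-1] = (symbol, encoded[-1][1] + 1)
        else:
            encoded.append((symbol, 1))
    return encoded

print(run_length_encode("aaabccdddd"))  # [('a', 3), ('b', 1), ('c', 2), ('d', 4)]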

It’s worth noting that science is predominantly generative. The rules of physics, as they are known, allow you to figure out what to do when you try to produce a result. Imagine building electrical circuits by chance and hoping they will work out. Trying to mix chemicals to create new ones without the generative theory behind it is called alchemy, not chemistry. But art forms, too, can have generative rules. The layout of a good photo is reasonably well-known. Colour theory helps direct fine art. Understanding the effect of lighting on a scene improves clarity and can be used to evoke a mood, transferring emotions.

When art forms provide the rules of creation and let you work within them to forge something of value, you are freed by the constraints to concentrate on what you need to do to express yourself fully. This truism appears to have gone unnoticed by the architects who fought against Christopher Alexander’s work.

Patterns were meant to be a set of rules or guidelines on what to think about and when. They were to be used, in sequence, by people when constructing their homes and businesses to generate a good feeling and wholesome building. Patterns were intended to be generative. They are the rules or fabric of construction. They are the material with which we collaborate.

The artist, architect, or director is still the creator when they abide by well-understood patterns. Information, and therefore art and meaning, is the perceptible deviation from expectation. It’s an inspired choice or arrangement of already accepted structures. It’s not the presence of wacky elements (which Alexander called ‘Funky’ design[NoO2-02]) or taking things away without reason (which creates tension by their absence), but the meaningful expression through appropriate selection.

We should follow the patterns chosen by one in the know. Design patterns give the most to those who recognise new ones or see their essence. Those who can perceive new patterns in the random interactions of people building or creating are the ones we call the creators of new rules and of a new, larger, more powerful language. They didn’t create, though; they saw. They had the acute vision to recognise something coming alive.

Generative processes are extremely important when creating anything of sufficient size. Without patterns, creation at that scale leads to decision-making from a distance, which is sterile, wasteful, and ultimately alienating to those involved in the construction. Christopher Alexander himself said this from a different perspective:

ALL the well ordered complex systems we know in the world, all those anyway that we view as highly successful, are GENERATED structures, not fabricated structures.

— Christopher Alexander, The Nature of Order, Book 2[NoO2-02], p. 180.

Not just the large ones: all successful, complex, well-ordered systems must be generated. In systems theory, a successful complex system is found to exist only if it has grown from a smaller successful complex system. Success is never all at once. If we build a large complex system in one go, it will invariably fail to do what the designer intended. It will do something, and it might be hard to stop it. Just like a wish from a genie, it will do what you asked, not what you meant. So, we should never create large complex systems wholesale. Always build them out of smaller, successful systems. We must allow the smaller systems to combine and generate their way into larger systems.

1

I have not been able to source a good image to show it, but the main bedroom was designed with a thin window tracing along the roof, down the wall, and along the floor making it effectively impossible to place a double bed.

2

The original source of this quote is unknown, but it is repeated verbatim in many places; I believe it appears on page 2 of As Little Design as Possible, Phaidon, 2011, by Sophie Lovell.

About resolving forces in a context

A pattern begins with the context and the forces and only then starts to show its value through the constraints of a solution that satisfies them. Contexts don’t always have the same solution when the forces at play differ, and forces don’t always lead to the same solution when operating in different contexts.

Forces are needs, desires, and requirements. Sometimes, a stakeholder’s values impart a force upon a situation. Forces are the stresses and pains of the participants in the environment. They are the drivers of action and the cause of any imbalance. They are the unresolved conflicts or dependencies. In this, patterns rely upon and resolve the relationships between the identifiable parts of a context.

Context is the environment. It’s the situation in which these forces are playing out. It can include a project, the timescale of a solution, or other external limitations. It can be the available actions, such as tools and the principles to which a solution must adhere. In effect, the context is the sum of the rules of physics governing what is possible. It is the set of rules by which you may introduce change.

So, a pattern is a recognition of a set of forces in a context and the steps and techniques leading to solutions balancing those forces. Not a solution, but a guide to reach one. The patterns in A Pattern Language are almost all guidelines referencing things to look out for. They spend as much time on the motivation as they do on the suggested steps. Very few are explicit about what to do but will give limits and contextual metrics for a successfully implemented solution. This is in stark contrast to the patterns in almost all books on software design, which regularly offer a solution or three to show how you can do it but do not so much show you the way to reason yourself towards a solution of your own.

Not all books are like this though. One book I read on compression when I first started out in computer games fed me a diet of information theory and examples. I learned how you might exploit your domain knowledge to reduce the number of bits required to store your data. I internalised much of this wisdom and often view problems through its lens, and not just problems of compression.

Some building patterns are only appropriate within a culture or a climate. In software development, it’s the same. Consider which patterns only matter because of the medium, such as the language. A C++ pattern might be a language feature in Python. A pattern in C might be impossible to implement in Java or Rust. Consider also the applicability of patterns at different scales of each aspect. Does team size or product scale affect which patterns are valuable or available to the project? Does a quality bar or expected product lifetime affect whether safety-related patterns apply? Is maintainability or project budget more important than time to market? Some higher-level patterns resolve some of these forces over others. These are all part of the context.
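
For instance, the GoF Iterator pattern largely collapses into a language feature in Python, where a generator method makes a class iterable without any separate iterator objects. The names below are invented for this minimal sketch.

class Playlist:
    # Iteration is part of the language; no ConcreteIterator class is required.
    def __init__(self, tracks):
        self._tracks = list(tracks)

    def __iter__(self):
        # A generator supplies the traversal the pattern would otherwise encode.
        for track in self._tracks:
            yield track

for track in Playlist(["intro", "verse", "chorus"]):
    print(track)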

The patterns of building construction don’t define what a place or thing is made of; instead, they define what kinds of affordances it has to the inhabitants—the actions and interactions that can take place in it.

The design of a spade does not specify metal or wood but a light shaft and a blade harder than the material it targets. It includes size and proportion limits balanced against leverage requirements for a specific user and the target’s toughness. It’s relative; it’s about relationships between elements, constraints, and context. It’s all about how we will be using the tool, the contexts of dirt and available manual labour, and the forces of needing to move material from one place to another with a delicate touch. This is a pattern resolving forces.

When you think in terms of lawns, walls, paths, and trees, these are the pieces by which we complete the pattern. If you instead turn to the interpretation of things as relationships, such as a sunny place where you can be with trees, a terrace on the street, or a garden wall, they are about how things interact to produce something more meaningful. These are the forces. They are what it is for. People’s impressions of a place aren’t descriptions; they are the hopes. And hope is a force.

We should always consider design patterns superior to solutions because they are transformations rather than additions. They do not conflict with other patterns because they preserve and define, not replace or subtract. Why this is so will be explained in the chapter Fundamental Properties.

A language, a hierarchy, a web

This is often the most difficult aspect to comprehend. A pattern is part of a language. What the language is, is of secondary relevance to this fact, but I will cover both. The vital part of this claim is that patterns are part of a structured form with rules and meaning.

Each pattern in a pattern language is a compilation of wisdom for creating a local solution in a context, resolving forces. The context is only sometimes a set of concrete recognisable elements. More often than not, other patterns provide the context. Patterns are hierarchical, but they are also a web or, as said in A City is not a Tree 1[ACinaT65], a semi-lattice. The top-level patterns rely on uncovering a configuration of subordinate patterns that satisfy them. The lower-level patterns need to serve both their neighbours and their parents. A Pattern Language[APL77] provided a set of interconnected, hierarchical patterns that could build a shed, a village, or something as large as a country.

Perhaps this sounds contradictory to the earlier statements of patterns being non-conflicting. Why would a pattern rely on a particular configuration if conflict was already avoided? Well, it’s because patterns do not cause conflict; they resolve conflict and maintain harmony in a system as it grows. But in an existing system with conflict, you must find and map patterns to the present structure. It is the existing discord that requires us to seek a path back to a good configuration. In a generative process, you don’t need to locate these patterns, as they emerge of their own accord. How this happens will be covered further in the chapter Fundamental Properties.

We only see the beginnings of these kinds of interconnected hierarchical patterns in Pattern-Oriented Software Architecture[POSA96] and Patterns of Enterprise Application Architecture[PoEAA03]. These books concern themselves with the construction of applications, not just solving code riddles. There are no higher-level or lower-level abstraction patterns in the GoF book[GoF94], and only a few patterns refer to each other as neighbours (such as Interpreter referring to Composite and Visitor). They don’t tie code together as much as they should or talk enough about the adjacency of other patterns. When each pattern is independent of all others, it’s not a language but a menu. When patterns are strongly coupled, you are unlikely to use one without the other, and the tapestry is missing. Patterns are interlinked but not so tightly coupled they cannot exist without specific others.

Each pattern then, depends both on the smaller patterns it contains, and on the larger patterns within which it is contained.

— Christopher Alexander, The Timeless Way of Building[TTWoB79], p. 312.

Again, this might seem somewhat contradictory to the hierarchies of patterns. But let’s step back and see. Patterns are about resolving forces, most often of the needs of the inhabitants of the environment. That environment is the context, often resulting from the application of a higher-level pattern. But it’s not an element of the higher-level pattern. It’s an element of the realisation of a solution within the constraints of the higher-level pattern.

Each one is incomplete, and needs the context of the others, to make sense.

— Christopher Alexander, The Timeless Way of Building[TTWoB79], p. 312.

The higher-level patterns often provide guards against creating unresolvable forces in terminally conflicted contexts. This is why, even though a higher-level pattern might ask for 159 Light on two sides of every room, the sub-patterns could be to do with depth of windowsill (to allow sitting in the window to some extent referenced in 202 Built-in seats and 222 Low sill) or the spacing of pillars on a non-wall, such as is seen in 119 Arcades and 226 Column place. The specific sub-patterns must be selected only after the new contexts have been tentatively accepted. This way, even though patterns depend on each other, they are not coupled. They are given relevance by how they relate.

In this network, the links between the patterns are almost as much a part of the language as the patterns themselves.

— Christopher Alexander, The Timeless Way of Building[TTWoB79], p. 314.

It is indeed, the structure of the network which makes sense of individual patterns, because it anchors them, and helps make them complete.

Each pattern is modified by its position in the language as a whole: according to the links which form the language.

— Christopher Alexander, The Timeless Way of Building[TTWoB79], p. 315.

To be understood

In Christopher Alexander’s work, the call for a pattern language was to provide non-architects with the necessary words to design their own spaces. The hope was that a language was all that was necessary to produce functional, beautiful buildings. The GoF book doesn’t claim to be a language in this sense, itself suggesting that it is not a language by which entire applications could be developed2. It recognises how patterns were seen in architecture but claims only to use the elements of forces and the way the patterns were captured. The authors miss the dual nature of the term language and downplay the benefits such a language provides.

It’s the naming aspect that succeeded in software design patterns. There is value in being understood, and Domain Driven Design[DDD04] made Ubiquitous Language a central theme throughout, from the higher-level design to the detailed implementation. Named design patterns allow us to speak to others in shorthand. Not all usefully named things are design patterns, and not all patterns have clear names. But design patterns let us think about pieces of a larger project by defining these ephemeral elements and their boundaries. So, even if they have poorly chosen names, as long as we agree upon those choices, it’s better than if we had not named them at all.

Clarity and hidden languages

Outside of the immediate value in the correctness of code, there’s also maintainability and the ability to talk about features we don’t yet have. The common language element of design patterns is brought up at the beginning of the GoF book, but it’s not explained in detail. In software, words are the elements we can use to build up towards a solution. They can be small or large words. We have low-level words such as lists, trees, and files; mid-level words such as message-bus, protocol, and connection; and feature-level words that are more domain-specific, such as account, inventory, molecule, page, and puzzle. But not all words are nouns. We have transactions and deletions, creation and modification. We have calculation and verification and more. All these words are literal because they exist in the code in the language used during interpretation or compilation. When we all agree on these words, and they match the non-code design and the elements of the world, we start to call them a ubiquitous language.
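
As a small, hypothetical illustration of these levels of words, the following Python builds a feature-level word out of lower-level ones; the names are invented for this sketch rather than drawn from any real project.

from collections import defaultdict

class Inventory:
    # A feature-level word built from the low-level words 'dict' and 'int'.
    def __init__(self):
        self._stock = defaultdict(int)

    def add(self, item, quantity):
        self._stock[item] += quantity

    def remove(self, item, quantity):
        # A deletion word: a verb the whole team can agree on and reuse.
        if self._stock[item] < quantity:
            raise ValueError(f"Not enough {item} in stock")
        self._stock[item] -= quantity

    def count(self, item):
        return self._stock[item]

inventory = Inventory()
inventory.add("molecule sample", 3)
inventory.remove("molecule sample", 1)
print(inventory.count("molecule sample"))  # 2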

Guy Steele once gave a talk3 on how programming languages can be large or small. He explained how building up a larger language inside a smaller one is a way to make the language successful and how the ability to add new words provides a place for a community to grow the language into a larger one in practice. In his talk, he indicated that defining a common vocabulary in any source code makes it a larger language and that a larger language is always necessary to construct larger projects.

The existing software design patterns can provide standard terms people can use to comprehend a system. They give names to the organisation of the words without being present (you don’t call your factory pattern implementation RequestProcessorFactoryFactory4.) They provide a way to discuss how the parts work together and how the structure affects that. These meta words allow people to suggest future changes, even if they were not part of the community that built it.

All this means there is the language in which you program: C, Python, Assembly, JavaScript, Rust, or Java. There can be a language one step up, which includes the standard library; this sets the tone for later layers or provides seeds for the formation of the crystals of idioms.

There is also the language in which you write higher-level code built from the small words in the lower-level language. At this level, we’re talking about frameworks for writing larger things. Examples can be found with Express for JavaScript as a set of ways in which things are worded and grammatically combined, which are definitely not required by JavaScript but are required by Express. Game engines are frequently quite strict on how you code for them, and developing against the language rather than with it will lead to poor performance and problems that are difficult to debug.

In Express, Django, Rails, and Unreal Engine, there are core languages. Your framework might use Rust, Haskell, C, or Java, but you can’t just begin programming in a framework by understanding the language in which it is written. Each will have its own set of unique words and grammar.

Beyond the framework language, there will be the language of the project: terms specific but ubiquitous to your application or the team developing it. These will usually be idioms or idiomatic uses of patterns. All these layers need names for their content. Pattern names are just one of many ways to communicate how things are and how they could be.

1

I found this paper re-published on pages 67-84 of Design After Modernism[DAM88], but it’s been published in a few other places too.

2

[GoF94] Chapter 6.3, p. 357, Alexander’s Pattern Languages.

3

Growing a Language, OOPSLA 1998, ACM Symposium talk.

Disloyal

There’s another kind of pattern. It’s the type of pattern everyone recognises and puts on the naughty list. These are the anti-patterns. The only difference between them and other patterns is that they hurt rather than help you. Anti-patterns are natural patterns that cause you problems. It’s helpful to recognise them quickly and find details on an alternative, a replacement pattern you want to use, or at least a method by which to extract yourself from the anti-pattern. If regular design patterns are wisdom, and wisdom is knowing what you should avoid doing, then anti-patterns are the design patterns of things that make sense at the time but that you really shouldn’t do.

There are other sub-categories on the negative continuum. From patterns that are good but have some warnings to those that aren’t self-forming, but we genuinely want them to be. This is why I use the term disloyal. Patterns don’t care about humans, buildings, code, or customers. They just exist. Sometimes, they hurt by design; sometimes, they are extremely rewarding in a bad way, like chocolate cake.

The pattern of building large, sweeping corners to reduce the chance of clipping the kerb causes fewer bent wheel rims. But it increases pedestrian fatalities because visibility drops, time to cross lengthens, and cornering speeds increase. The pattern was self-reinforcing and self-forming, but ultimately, it’s an anti-pattern.

The pattern of building more auto-complete technology into IDEs is an obvious benefit to people wanting to code faster without looking up the API. But it comes at the cost of a shallow understanding of APIs and the increasing length of names without an equal increase in clarity of meaning.

The point is that patterns exist whether you benefit from them or not. Making anti-patterns visible is the first step to undoing any damage they cause.

Emergent, not idiomatic

When asked for a one-sentence characterisation of a pattern, I will usually reel off something like: ‘A pattern is a self-reinforcing configuration formed as a response to a set of forces in a context, whether for ill or good’. The word solution is absent, and I call out how the final form is not necessarily advantageous for those involved. These are important distinctions because so much of the design pattern movement has hinged on them all being solutions of some form, and little is understood about their effects on the health of their hosts.

People might take it for granted that patterns are always positive. Patterns aren’t inherently good; they are just that self-forming, self-reinforcing configuration. Otherwise, we wouldn’t have anti-patterns; or, at least, not in the form I think of them now.

Notably, the value of a pattern can change over time, just as a species can be effective in one environment and mediocre in another. A pattern can be healthy in one context and unhealthy in another. The only constant is their self-forming, self-reinforcing, or autopoietic nature in the presence of forces in a context.

This emergent nature is not only fundamental to them not being idiomatic but also part of the verification process carried out by Christopher Alexander. Good patterns emerge. You find them by looking for how to resolve forces. When you successfully resolve them in a context for the first time, you often find a pattern has been staring you in the face throughout your endeavour. You may also find you have reinvented something. This is because common problems, when resolved fully, tend to have solutions with lots of common properties. And that’s the essence of a design pattern.

A pattern recurs because of the common properties of a problem. But some problems seem vague. Others only require those involved to agree. Here, an idiom suffices, something we accept as true. It is not self-forming or naturally occurring, but forming an agreement is natural. Idioms are always the things we have to agree on to make progress because a wrong answer is difficult to judge. They can often be completely unfalsifiable, such as the tabs vs. spaces arguments or bracing style wars.

Idioms contain inertia. Patterns, being emergent, collapse when the situation changes. Some idioms fight back. They don’t allow themselves to change in response to new data. They repeat again and again the same not quite right solution. They are a cultural phenomenon. An idiom is a convention. Conventions are difficult to repair, as there’s usually a lot of personal investment and apparent value built on these imperfect foundations.

Patterns emerge by themselves; we agree upon idioms. Because of this, idioms are less likely to be detrimental when they first arrive. We judged, balanced, and consciously accepted our idioms at some point. Patterns result from forces, and the solution gravity well forms around what resolves them. But that means the solution does not need to be judged by anyone before it becomes the norm.

In effect, anti-patterns exist because they satisfy the pattern, not the entities involved. The entities involved might have opinions about the solution’s value, but that’s a different context from that of the pattern. This means that a pattern can become an anti-pattern, and an anti-pattern can become a pattern when we judge it from a different context.

Found, not invented

In The Oregon Experiment[TOE75], Christopher Alexander reiterates what he means by patterns. He adds some nuance to the term and captures more of the value in the context.

With this in mind, we may define a pattern as any general planning principle, which states a clear problem that may occur repeatedly in the environment, states the range of contexts in which this problem will occur, and gives the general features required by all buildings or plans which solve this problem.

— Christopher Alexander, The Oregon Experiment[TOE75], pp. 101–102.

Notice how he claims a pattern states some invariants of a solution. The definition provided here is different because it does not specify the features of the solution to a problem but instead specifies the features of any solution to a clear problem. This indicates many possible solutions. The phrasing guides us to expect a list of criteria any good solution must have. It also points out that our problems need to be clearly defined. Part of the work of the ‘forces’ definition is to provide details on the problem’s root cause.

This is why it’s important to find patterns rather than invent them. An invention is a solution to a problem. Patterns are about solutions to recurring problems, which means you can’t invent a pattern, as you don’t have the same problem over and over again. You solve it once given the problem you face. You solve the same problem the next time by adjusting your solution. This means one person can’t realistically find patterns from their own work. They are inherently unable to verify whether the emergence of the solution is recurrent. It’s possible this nullifies many patterns that claim to have three instances, as the cases might be sequential and not distinct.

This denial of invention was officially recognised by the pattern movement, if not practically carried out.

[…] one thing that has distinguished the patterns community has been its aggressive disregard for originality.

— Brian Foote, Pattern Languages of Program Design 3[PLoPD3-97], p. x.

The later books hold up in that they are better curated and contain more patterns of the found variety. But why was it essential to make this statement in the first place? I suppose it was because some community members realised this was a systemic problem hurting the movement. What’s strange to me is how the pattern movement was so strongly related to systems theory, yet it took that long to realise the need to fight back against this detrimental feedback loop.

However, some patterns do feel like inventions: mathematics, bows and arrows, weaving, and the eye. They don’t seem to emerge from the forces naturally. As mentioned, these are complicated solutions, like epiphanies or inventions. But they keep occurring over and over. This is because when you have a set of forces and people working in them, trying to resolve them, or even natural processes such as natural selection, a solution will be attractive, even if it’s not obvious to those absent at the moment of invention.

These inevitable inventions are evocative of pits in an AI training landscape, where a good fit has little to no gradient leading towards it. Nonetheless, the solution is a strong attractor around this inconspicuous local minimum. A tiny divot of success.

How Systems Theory Applies

Systems theory is the study of dynamic relationships, of how systems react and their emergent behaviour. It explains how they form and how they fight back against unwanted changes. Systems have senses, reactions, and an organisation that determines whether they are reacting or in the process of destruction.

Systems theory applies to pattern theory in that patterns are self-constructing systems. I was surprised by how close Christopher Alexander’s theories grew to systems theory over time. Many aspects of The Nature of Order centre around the way systems interact.

In his last book, The Battle for the Life and Beauty of the Earth[Battle12], Christopher Alexander declares two ways of constructing: system A and system B. It’s unclear whether he was referring to systems theory when analysing the fighting between the systems, but his records align with the expectations of a systems theory approach to understanding why things panned out the way they did.

We will start with a primer on systems theory concepts and then move on to learn how recognising policy resistance can help us. Once primed with this knowledge, we can move to new heuristics for recognising whether code is good and find a new definition of good that might upset some people. But at least we can now be on the lookout for bad code to love instead.

Then, we finish the chapter by reviewing other forms of feedback affecting the design pattern movement, anti-patterns, and other pattern forms. Armed with this new lens of systems theory, we can better understand the later chapters and specifically, approach a new understanding of why patterns failed the way they did and what we can do about it.

Senses of systems

Systems react to input all the time. It’s what makes them systems. A pendulum swings under the input of pressure. A seed germinates when the moisture level and temperature are just right. A developer screams when they discover the delivery date has been brought forward by a few weeks.

Systems react because they have senses. Not always literal senses in the form of sight or hearing with eyes or ears but senses nonetheless. When systems react, we must assume they are responding to something that they became conscious of. Reactions are proof that a system has sensed an event. A pendulum does not have a sense of pain, but if we hit it, it does react. This is what we mean by senses. They aren’t senses in a literal interpretation but the ability to respond to a stimulus.

A counterexample might help. A pendulum does not have a sense of temperature. It is not affected by it. The seed senses temperature and reacts. A pendulum does not work well when inverted, so in effect, it perceives the inversion as its behaviour changes. However, the orientation of a seed has minimal impact on it. And, whether you raise their temperature or invert them, a software developer will react to the event and can even tell the difference between the two.

This is what we are talking about here. Senses of systems: not so much a conscious awareness of the outside world, but the ability to be affected by the environment and phenomena. We also want to examine our awareness of these senses and how we can influence them to improve our situation. But first, we need to think a bit deeper about what a phenomenon is.

Phenomena

A team working on a project is a system. We can map the inputs and outputs to responses and behaviours. One such behaviour is the reaction to news about the project’s status. Even though the team is made of humans, they cannot see all the aspects of the project without taking advantage of the right tools to view it. The news about the status is input, that is, phenomena the team can react to.

If you think about our customs for observing progress, you will realise none of the indicators are visible from the outside. The metrics all require you to collect and collate data before estimating. And that’s when you’re working on a project where you deliver early and often. The visibility of progress on large projects with up-front design is low. Progress isn’t a series of phenomena; even if it were, you’d still only know how many there were once you were done. It’s foolish to assume you can come to a reasonable conclusion about the genuine progress made towards a distant goal. In 2021, I believed I would finish writing this book in 2022.

Estimating or calculating the amount of effort put in is easy. You can easily count how many people are currently working. You can calculate hours spent on a project. You can see how many tickets are closed. The measure of effort is evidence-based. You can measure it in cash spent on remuneration if you wish. Even if some people are slow and others fast, the overall effort is measurable.

But progress is only loosely related to effort. Therefore, using effort as an indicator means we don’t have the correct feedback to learn how to progress more effectively while we’re in the middle of the work. We need the right tools to be aware of invisible project attributes such as progress, but we also need tools and lenses through which we will look to gather other information relevant to our work. These tools should provide us with phenomena to which we can react.

A lack of phenomena can be considered a form of blindness. Using ‘can we ship it?’ as our metric creates an easy-to-use measure for progress. It’s a very rough one, but it’s still something. We’re no longer blind. When you estimate you are ninety per cent of the way through a project but don’t believe there is a product hiding in what you have produced so far, it makes for an anxious sense of urgency. That urgency is only present if there are phenomena to sense.

Consider also the types of problems you might like to have, such as too many users to support. Your network goes down because you are at a thousand per cent capacity; more people are using your service than you expected. When you fail to track these things, they can become problems you wish you didn’t have. Why didn’t you react to the situation before it became a problem? It’s probably due to the lack of a phenomenon. If your servers running payment processing go down due to a deluge of people trying to buy from you, it’s a good sign but a bad state to be in. You’ve got a lot of phenomena to work with, but it’s too late to rake in the cash.

Then there’s the blindness to slow changes—the boiling-frog syndrome. You can’t react to something that changes so slowly that you don’t notice when things get far out of hand. This blindness is sometimes referred to as a shifting baseline. The idea is that local recent deviations are so minor as to cause no ripples or resistance. Then, those measurements become the new baseline against which you compare future data. Slowly, the baseline moves into the zone that would previously have triggered a response. It’s the reason why we sometimes leave it too late when it comes to earthquake and volcano warning systems. The phenomena indicating the coming eruption hide in plain sight by moving so slowly that they do not trigger any senses.
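To make the shifting baseline concrete, here is a minimal sketch in Python, assuming a made-up metric that drifts upwards and a naive policy of folding each new reading into the ‘normal’ value; the numbers are purely illustrative.

import random

def drifting_metric(steps, start=10.0, drift=0.2, noise=0.5):
    # A hypothetical measurement that creeps upwards a little every step.
    value = start
    for _ in range(steps):
        value += drift + random.uniform(-noise, noise)
        yield value

baseline = 10.0   # what we currently consider 'normal'
tolerance = 3.0   # deviation from the baseline that should trigger a reaction
alarms = 0

for reading in drifting_metric(100):
    if abs(reading - baseline) > tolerance:
        alarms += 1
    # The mistake: every reading quietly becomes part of the new 'normal'.
    baseline = 0.9 * baseline + 0.1 * reading

print(f"final reading {reading:.1f}, final baseline {baseline:.1f}, alarms {alarms}")

Because the baseline chases the readings, the deviation at any single step stays within tolerance even though the metric ends up roughly twenty units above where it started; pin the baseline to its original value and the same run raises alarms after roughly fifteen steps.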

When you monitor the right things, you can often spot these situations before they become problems. But when you don’t expect something, you don’t watch for it, so you cannot know about it. By default, all the things we think are impossible are invisible to us and, in turn, we do not worry about them. So, even though we don’t want to be blind, sometimes we don’t even know what we can’t see.

Perspective

What we can see is limited by our perspective and our powers of discernment. Our perspective is made up of our position and what we care about. Our power of discernment is what we are attuned to pick up on among the soup of noisy inputs around us. This is the limit of what we can observe. Specialists will usually acquire deep powers of discernment in a specific field, some of which they will not even recognise having. But they can sometimes be utterly blind to systemic phenomena due to their perspective from inside the context of their speciality.

Let’s start by describing perspective with some examples based on views from different domains. There are many examples of words that mean something different even though they sound and are spelt the same way: read, arm, band, current, lie, dice; the list goes on, and those are just the ones I can think of quickly. However, we’re not talking about homonyms here, but rather about things that are the same yet are seen differently from different perspectives. Those who know about Domain-Driven Design may think of these as Bounded Contexts.

  • Book:
    • A librarian may see Dewey Decimal.
    • A borrower sees a copy of a title.
    • A publisher sees a stock and profit SKU.
  • Newspaper:
    • The reader sees journalists/bias or a physical copy.
    • The newsagent sees shelf space, cost, schedule and popularity.
  • Bus:
    • The maintenance team sees the vehicle.
    • A potential passenger sees a number or a route.
  • Plane:
    • The passenger will get on a flight to a destination.
    • The pilots fly a plane on a route.
    • The airline sees an asset with costs and benefits.

It is only because we all have noses, ears, and eyes that we can say that any one person’s nose, ears, or eyes are distinctive. Tattoos, by contrast, are rarely distinctive. In cultures and societies where they are rare, the mere presence of a tattoo is distinctive in itself, relegating its detail to second place. In societies where tattoos are common, each one will be relatively unique, and if they are all unique, then none are. Distinctive tattoos only exist where tattoos are common but have a typicality, such as size, placement, colour, or design; hence, I tend to notice facial or full-body tattoos, but not others.

We are adapted to recognising slight variations from a well-known hierarchy of features rather than the features themselves. Ask someone to draw someone’s face, and they pick out the variations better than the invariants. They pick out prominent features or scars better than the basic ratios. This is why artists have to study forms and physiology. Without explicit study, accurately drawing natural features is more demanding. Only when an artist has those core features down can they introduce the variations to achieve what we consider to be likenesses or caricatures.

Systems only see the signal. They ignore the expected. They have no need for it; it is not information. This is why artists strive to learn how to make things look natural. They need everything their art is not meant to be about to vanish. They need to be accurate because otherwise, the wrong things become that which is seen.

Responses

Systems respond by taking action. Sometimes, the response is incredibly simple, such as collecting the input into some reservoir. The system monitors changes in this quantity, and once it exhibits the phenomenon of reaching a threshold, triggers further action.

In a deer scarer, the water fills the bamboo, which acts as a reservoir. The pivot mechanism acts as a scale, weighing the water against the closed end. Overbalancing triggers the tipping action, which pours the water out and then swings the balance back the other way. All the steps are simple, but it is a system that responds to input and produces phenomena, triggering action.

DeerScarer
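The whole mechanism fits in a few lines. This is only a sketch with invented numbers, but it shows how little machinery a sense-and-respond loop needs.

def deer_scarer(drips, tip_threshold=20, drip_volume=1):
    # The stock: water held in the bamboo reservoir.
    water = 0
    clacks = 0
    for _ in range(drips):            # the inflow, one drip at a time
        water += drip_volume
        if water >= tip_threshold:    # the sense: the pivot detects the overbalance
            water = 0                 # the response: tip, empty, swing back
            clacks += 1               # the clack against the rock
    return clacks

print(deer_scarer(drips=100))   # five clacks for a hundred drips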

It’s always true that reactions happen because something was sensed. Without the initiating sensory stimulation, there cannot be a reaction. Perhaps you think that’s false because you’re wondering about when we detect nothing happening. Well, consider this: when do we notice a lack of activity? Is it arbitrary, or do we have an expectation of when something should have happened? We have a tolerance threshold, and we’re triggered to react when we reach it. A mechanical or biological timer runs its course and becomes a phenomenon of its own, like a screensaver kicking in.

In systems theory, we often talk about stocks and flows. These are the analogue equivalents of the phenomena we mentioned when discussing senses. They aren’t always acted upon directly, but sometimes they are. For example, let’s consider a mundane system. A cistern allows the flow of liquid until it is at capacity. It has a water level, which is a stock, and a soft trigger that allows fluid to gush from the inlet, which is the flow.

A cistern maintains a stock of fluid by reacting to the current level. It’s an elementary feedback loop. The phenomena are analogue, not digital like those we considered before. The reaction is also analogue in many cisterns. The response does not need to be linear, but must be a function of an input. Reactions can be measured and proportional, like how many months you get for embezzling company funds.

Cistern

Another analogue stock we encounter in software development is deployment pipeline time. As it grows, our annoyance grows with it in an equally analogue way. At some point, it triggers a flow of cash to buy new hardware, or of developer time to fix the build process and make it less intrusive. In some organisations, the threshold at which the cost becomes unacceptable is set low. These are better environments for developers to work in, as build times never get out of hand. In organisations where the build time is never addressed, there is an effect on a different analogue flow—one measured on the scale of employee retention rate.

BuildTimes

The stocks of any system are wide and varied. In a development organisation, you have staff with varying skill levels, varying productivity, and varying levels of awareness of the organisation’s culture. You have capital in the software and hardware you buy and use. You have a back catalogue of products you can continue to sell without incurring further costs. You have goodwill with other companies, academia, the public, developers outside, and developers inside. You also have money.

The flows of any system are just as varied. You train a certain number of people for a given number of hours per month or year. You hire at a rate and fire or lose employees at a rate. You even promote at a rate. You ship a certain number of features or products per cycle and are sued a number of times a year. You rent, pay wages, bribe officials, and pay the electricity bill. All these flows affect multiple stocks at once. Some are important, others less so.

Reactions will affect stocks, and stocks will affect or trigger reactions. This is the core of what systems are and do. The stocks and flows wrap around and stimulate themselves to act again and again in a loop. What’s unexpected is the types of behaviour they exhibit when chained together in complex ways.

Feedback

The combination of senses and responses is the basis for the core aspect of systems theory. Feedback emerges from these connections and is worth studying as an element in its own right. Feedback makes a system handle input and adjust rather than do what it’s told. It might be easier to explain this with a counterexample. When you plug the bath and run the tap, you are filling the liquid stock. There is no feedback, so the bath will fill at a constant rate until it overflows. If you turn off the tap and remove the plug, the bath will empty. If you want to reduce the water level to a particular height, you must monitor it and stop its flow once the specified height is achieved. Again, this bath example is not a system because it does not act in response to anything (well, perhaps except for the overflowing). It has to be manipulated directly.

The cistern example is a system that maintains a liquid level. This system reacts to the water level dropping, and increases flow. We know it senses because we can detect it responding to input. There is a negative feedback loop. As the gap between the current liquid level and the desired level increases, the flow increases proportionally to reverse the situation. In a steam engine, a governor releases steam to lower the pressure and stop the machine from running too fast. A kettle boils until the bimetallic strip bends enough to switch off the power to the heating element.

Most of these systems are systems of equilibrium. We call these negative feedback systems, as they are self-righting. Their design pushes back against increasing or decreasing stock by altering one or more flows to return it to an equilibrium point. Systems can have positive feedback loops, but these are generally unstable and unsustainable. For example, over-fishing reduces fish stocks, which leads to fewer new fish, which means any fishing is now over-fishing. The high-pitched squeal of a microphone too close to its speaker is caused by the amplifier re-amplifying the sound it just played over the loudspeaker. The amount of ice on Earth affects how much of the sun’s rays the planet reflects; as the ice melts, the heat captured by the land and oceans increases, raising the average temperature of the planet and further reducing the amount of ice. Positive feedback loops tend to cause runaway behaviour, where the response needs to be quickly halted before it becomes uncontrollable.
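The difference between the two loop types is easier to see side by side. The sketch below uses invented constants: a cistern-style negative loop whose response shrinks the gap to a target, and a microphone-style positive loop whose response feeds on its own output.

def negative_feedback(level, target=100.0, gain=0.3, steps=20):
    # Cistern-style: the inflow is proportional to the shortfall.
    for _ in range(steps):
        gap = target - level
        level += gain * gap
    return level

def positive_feedback(signal, gain=1.2, steps=20):
    # Microphone-and-speaker style: each pass amplifies its own input.
    for _ in range(steps):
        signal *= gain
    return signal

print(round(negative_feedback(10.0), 1))   # settles close to the target of 100
print(round(positive_feedback(1.0), 1))    # runs away; roughly 38 after 20 steps

The negative loop converges no matter where it starts; the positive loop has no equilibrium to return to, which is why it has to be halted from outside.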

So much of our lives is driven by the concept of feedback. Evolution is a feedback mechanism, and so is Scrum. Systems theory is all about feedback and the manner in which things react to each other. Christopher Alexander’s work crashed into feedback loops multiple times. At the core of all living processes lie mechanisms to create phenomena, sense them, and adjust future behaviour in response.

What does developer feedback look like? How about promotions, raises, bonuses, and other forms of reward? They are feedback, so how does a developer respond? Well, only rarely in the way the person inventing the reward mechanism intended. As mentioned before, a complex system responds as designed, not as intended. It follows the path of least resistance, fulfilling the wish as deviously as possible to gain the reward without expending any unnecessary effort.

When you praise or promote programmers for creating new features and rolling them out, you create a culture of making new things and ignoring maintenance issues. Old features are dropped in favour of bonus- or promotion-worthy work. If, instead, you reward programmers for fixing bugs, they spend less time protecting the code against future bugs and more time developing it in a way that leaves bugs minor but noticeable and easy to repair. Feedback is a powerful tool, but it can create strange situations you don’t want.

What about code? Code has feedback too. When code is difficult to write and debug, it becomes hardened to change. Feedback from the code informs the developer that it’s unsafe to make changes. Therefore, it stays unchanged. This makes it even less safe to change, reinforcing stasis until it is replaced wholesale, cut out like an unwanted growth. Code that is frequently changed due to many small bugs gets changed a lot in the future too. People will suspect it is the cause of new bugs. Without good tests, any change for the better will likely introduce new bugs, as the code has already proven itself ripe for side effects. It becomes a self-fulfilling prophecy, which is a form of positive feedback loop.

Motivation

Complete builds take a long time, so integration takes a long time, so people only attempt a final build once a week. This positive feedback loop can snowball as the reaction to long builds makes them longer still. A weekly build means there is more to integrate, so integration begins to take more than a day, so you make it fortnightly. Now there’s more to integrate, so builds take a few days to prepare, so you make it monthly, and on it goes until you have annual releases.

Client feedback on a project, if it’s overly negative or personal, will not be taken gracefully. This leads to wanting feedback less often, so you wait for longer between feedback meetings. The longer the gap between sessions, the more you invest before determining whether you took the right direction. The more changes they review, the higher the probability of a mistake of judgement or interpretation of your client’s needs. This will displease them, leading to worse discussions—another negativity-based positive feedback loop.

Positive feedback loops need not always be about bad things. When you pick up a new tool or library, your work improves. This makes you feel good about picking up new tools, so you do it more. The loop is positive and beneficial.

Systems theory gives us tools for thinking about how to motivate people. Motivation by reward is a form of feedback loop. You are motivated by the reward to do a thing; your response is to do the thing. If I want you to do a thing, how do I reward you so that you do what I want you to do? Rewards and punishment work well for some limited situations. However, research into what drives people to do great work has shown this type of motivation to be complicated and fraught with dangerous consequences if done wrong.

Paying people to work only succeeds to a certain extent, and trying to get more out of people by promising bonuses and extra rewards beyond what they need tends to backfire. Frederick Herzberg’s two-factor theory of workplace satisfaction suggests that money and other extrinsic motivators only work as negative motivators: if they are missing, they act as disincentives. Money doesn’t act as an incentive to work, but a lack of it is a reason to down tools. Money is a requirement, not a bonus.

The other problem with extrinsic motivators is that they generally don’t get people to act honestly. When you reward people with money for doing something, and they want the money, they tend towards meeting the letter of the agreement over the intent of the deal—just as we saw with building contractors using the cheapest materials within design tolerance.

Perverse incentives

If you want to get something done, you’re better off explaining the value in the result and leaving people to solve the problem for themselves. Telling people what to do to solve your problem and rewarding them for doing what you tell them turns them into robots, and they will act with about as much emotional commitment as one. Consider the case of perverse incentives, sometimes called the Cobra effect.

There is a famous myth1 where the British Raj in India was not pleased with the number of cobras around, so they decided to cull their numbers. The British, being the British, made two mistakes typical of their heritage. The first was to underestimate the capacity of thought of anyone not from Britain.

Rather than do the culling themselves, the Raj decided to outsource the problem. Instead of asking the local population to decrease the snake population using their initiative and rewarding them appropriately, they offered a cash incentive for bringing dead cobras to the Raj. The idea was to reduce the number of cobras by buying them until none were left.

However, people are smart. When told they could get paid for bringing cobras to the Raj, the locals decided upon the wonderfully counterproductive plan of breeding cobras specifically for selling to the Raj. And so, the natural cobra population stabilised. In addition, the Raj paid more each day with no visible impact. This went on for some time until either someone noticed what was happening or the Raj thought the approach wasn’t working, so they gave up and stopped paying for the cobras.

But that wasn’t the worst of it. The second mistake the British made was believing their decisions had no lasting impact. When they reversed their rules, they assumed things would go back to the way they were before. But the environment had changed. There were now quite a few cobra farms. When the Raj stopped buying cobras, the farms had no reason to keep operating, so they closed down. Rather than wasting time killing the cobras, the farmers stopped working the snake farms and set the cobras free. In effect, the Raj had accidentally paid to increase the population of cobras in the region.

The veracity of the story notwithstanding, the problem with reward mechanisms of this kind is that they do not consider how systems reconfigure themselves to take advantage of any newly introduced responses to stocks and flows.

Misaligned goals

Another problem can be when you try to improve things but forget your core goal. The Raj didn’t want cobras. Rewarding their presence in any location wasn’t aligned with their goal. When you want to remove something, look for ways to reinforce removal, not movement or arrival in another location. When you want to gain something, look for ways to reinforce increase, not borrowing, stealing, or having more than another, as this last example often leads to destruction.

An example in living memory for many programmers is the Hacktoberfest T-shirt debacle. People were to be rewarded for making pull requests, not for improving the repositories. The co-ordinators of the event did not foresee the impact. Trivial updates to README.md files and even actively dangerous changes were submitted, all in the name of gaining a branded item of clothing. The sudden influx of low-quality pull requests imposed additional work on many open-source projects’ maintainers.

If you reward programmers based on the number of lines of code they write, they will produce a lot of lines of code. If you reward testers on the number of bugs they find, they will file multiple bugs for the same issue and prioritise the types of work where entering new bugs happens most frequently over finding the more difficult-to-notice bugs or regressing existing ones. It’s just like police officers with an arrest quota deciding to patrol neighbourhoods known for higher crime levels so that they have a better chance of making their numbers. You don’t want testers to behave like the police of your application.

If you reward your programmers for fixing bugs, you might even notice them creating bugs so they can resolve them, or closing bugs as fixed when they weren’t sure, because fixing ten bugs with a 50% chance of success pays better than solving three for sure, as far as rewards go. That’s without entertaining the problem of programmer overconfidence in their debugging and fixing skills.
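The arithmetic behind that gamble is trivial, which is exactly why the reward system gets exploited. As a sketch, assuming a hypothetical flat unit of credit per bug closed:

reward_per_fix = 1                     # hypothetical flat credit per bug marked as fixed

sure_thing = 3 * reward_per_fix        # three bugs solved properly
gamble = 10 * 0.5 * reward_per_fix     # ten closed, each with a 50% chance it was really fixed

print(sure_thing, gamble)   # 3 versus an expected 5: the gamble pays better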

Align incentives

In summary, feedback is a powerful tool you must always keep in mind when looking to write policies or tactics for your strategies. A poorly conceived policy can backfire, and a misaligned tactic can worsen things in the long run.

  • Look to the core goal of your strategy and make sure that your incentives align with it.
  • Give the job of solving the problem to the people at the place where it can be resolved.
  • Make your goal transparent, and let others know why you desire it and what problem it solves.
  • Give others the ability to see the quality they bring by solving your problem well.

These actions allow autonomy and open the path to a sense of achievement and mastery of the problem. So, incentivise solving your problem, not exploiting the wording of your command. There are many ways to reward people for achieving your goal that require almost no investment on your part. For example, many people don’t want to merely solve a problem; they also want to tell others how awesome they are at doing it.

1

The famous (and apparently false) tale appears to have originated as an anecdote in Der Kobra-Effekt by Horst Siebert. I have not read it but have heard the tale from multiple other sources.

Emergent behaviour

Emergent behaviour is the source of most pain when dealing with systems. However, emergent behaviour is also the only value in building systems. Without it, they are just processes, and processes are inert. Complexity relies on interaction, the reaction to a stimulus, which places all complex achievements squarely in the land of systems. So, we must understand the emergent behaviour of a system to be sure it does what we intend and not what we don’t.

When we think of patterns, we must not think of them as raw materials or thought-free processes. They are never modules we slap onto an existing system. They are organisations of relationships between things; at worst, relationships between the raw materials.

It’s normal for something not to be its raw materials. We can identify things as their configuration. They can be the pattern of how they move or their path of growth or reactions. A whirlpool is not just water; it is the form and energy of it. Otherwise, you could capture it in a bucket and take it somewhere else.

Forests and trees

Which bit of the following code does the sorting?

val = [5, 1, 4, 2, 3]             # some values to sort in place
first, last = 0, len(val)
for i in range(last, first, -1):  # 1: loop over everything
    for j in range(first + 1, i): #    and loop inside that loop
        if val[j - 1] > val[j]:   # 2: check if we are ordered
            temp = val[j]         # 3: cache a value
            val[j] = val[j - 1]   #    and copy a value
            val[j - 1] = temp     #    and overwrite with the cache

When we dig too deep, we deconstruct too far. We cannot see which line of code does the sorting because we are too narrowly focused. There is a flow on the outside that defines what we iterate over, but that does not sort things. A check identifies whether there is disorder, but that does not sort things. Three lines swap the order of things, but they would bring ruin if the contents were already in order.

These parts of the algorithm are the Structure in section 1, the Sensor in section 2, and the Reaction in section 3. It can sort two elements using only the Sensor and Reaction. For more, we must introduce Structure. But notice it can only do something when it has a Sensor. If it acts without observing, it is not a system. As soon as you have a structure with an observation and a resultant action, it is a system.
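One way to make those three roles explicit is to pull them apart, as in the following sketch; the function names are mine, not established terminology.

def sensor(val, j):
    # 2: the observation: is this neighbouring pair out of order?
    return val[j - 1] > val[j]

def reaction(val, j):
    # 3: the action taken in response to what was sensed
    val[j - 1], val[j] = val[j], val[j - 1]

def structure(val):
    # 1: the flow that decides where to look next
    for i in range(len(val), 0, -1):
        for j in range(1, i):
            if sensor(val, j):
                reaction(val, j)

numbers = [5, 1, 4, 2, 3]
structure(numbers)
print(numbers)   # [1, 2, 3, 4, 5]

None of the three parts sorts anything on its own; the sorting emerges from wiring a Sensor and a Reaction inside a Structure.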

Patterns are forests in this way because they explain how a living collection works together to create, maintain, and heal. There’s less healing in programming, but repair is still important and shows why patterns and agile, iterative approaches are related. The observer in patterns is the programmer, observing the interplay between the code as it is and the unfulfilled requirements. The programmer reviews, decides, and acts. The pattern defines the observations you must complete to determine which of its steps to take and where.

Code that survives is good at surviving

Good code survives because it creates an environment in which programmers survive. The environment is important. Software development is an ecosystem: programmers inhabit codebases, but there are other competing inhabitants too. The things that survive in an environment are the replicating entities, the things that spread and grow, and their hosts. Whatever meets these criteria, whether biological or memetic, counts as an inhabitant of the environment.

Code

Code survives, or at least configurations of code do. Sequences of code can be copied, altered, deleted, or left alone. These sequences and relations are the things that are replicated with an opportunity to change; thus, we can call them memes in the original sense. Some of the structure survives the copying process, and we give names to that which survives to enable discussion. When we name a thing, its chances of survival go up because it becomes a bounded concept. We document protocols, not sets of ten lines of operations. We write comments for functions and procedures, not for every 100 bytes of machine code. We capture the recognisable patterns and how the sequences interact to bring about a change. These named features will be carried through the copying and mutation steps.

Conversely, it’s not a great idea to assume code that survives is good for developers and clients. After all, anti-patterns exist. The attributes that allow code to survive don’t match what we normally call good or valuable code. Unfortunately for us, code that survives doesn’t equate to high-performance code. Just because code is static doesn’t mean it’s correct or even provides users with value. No, indeed, code that survives has attributes perfectly tuned to make the code survive. That is all. It’s good at surviving, but not good for us.

Beyond surviving, some code gathers momentum and picks up programmers or codebases in which to survive. Where other code stays put and lingers in only one codebase, this code drags in other libraries or ways of thinking and becomes even more robust. It brings an environment with it. It infects minds with its values. This ability to spread to new hosts makes a piece of code viral. It’s different again from a pattern in that it’s not necessarily a self-forming system or structure of relationships. But when it does spontaneously form one, it maintains its idioms very strongly.

There are recognisable attributes that make code inviting to a host. Here are a few that stand out.

  • Solves your problem:
    • It’s easy to see the value of the code—what problems it solves.
    • You can tell it will solve a real problem you are having right now.
  • Unambiguous:
    • It is easy to understand what the code intends to do.
    • Examples show how it works with glue code.
    • It is something you can touch; you can see the source.
  • Highly rated:
    • There is a community of positive comments.
    • Products made with it show good results.
    • ‘Not getting fired for buying IBM.’
  • Safe:
    • There are no known issues with the library.
    • The community has no major outstanding concerns or caveats.
    • The license is compatible with your use case.
  • Slot-in replacement:
    • It is easy to add to an existing codebase.
    • It could even be part of the language in the near future.
    • It could be as simple as copy-pasting1 it in.
  • Gentle start:
    • It has easy-to-read documentation or is designed so that documentation is unnecessary.
    • The API is quick to get started with, and defaults are sensible.

I am torn about mentioning other things experienced developers care about because, remember, not everyone considers these. Even though they make a library valuable to someone with experience, lacking them won’t immediately make it less likely to go viral; the inexperienced outnumber the experienced. Still, they are a quality of the environment, and therefore part of the situation we find ourselves in.

  • Deep on inspection:
    • A small API with extra depth and options hidden under the surface shows care about someone needing to refine their use later in development.
    • A shallow API with everything completely hidden will become a burden later on in a project, so it will repel experienced programmers.
  • Clear error messages or good logging features:
    • Anything that shows the library developer knows terrible things happen and understands people are going to want to figure out what they were and why.
    • Scripting languages that are perfect in every way but don’t have debug output or breakpointing built-in become dangerous time-sinks.
  • Awareness or compatibility with more platforms:
    • It’s not only that the code will be compatible when there is a shift to a different platform but also that a multi-platform library is less likely to have structural issues limiting its usage to single development environments.
  • Lots of examples of usage:
    • Documentation is one thing, but a suite of tests and examples
      • shows you how things are done, and
      • proves things are working.

Vestigial organs

Much like our appendix or tailbone, codebases can carry code that no longer serves a purpose. As long as it isn’t hurting anyone and remains out of sight, it can quietly take up compilation time and hard drive space without being disturbed. We carry around code that doesn’t work for several reasons. Sometimes, it’s physical, such as when the code inhabits the same files or folders as other code we actively use. Sometimes, it’s historic code for which there is little reason to inspect how or whether it works. Any old code we didn’t delete when it ceased being necessary will remain for months or years.

Getting rid of unused code can be difficult. Even if you don’t have unit tests, you can run code coverage tools over realistic usage to see which parts of your codebase are exercised. You might use higher-level tests, such as integration or end-to-end tests, to deduce which code isn’t really required. But most developers don’t think to do this kind of proactive work to discover unnecessary code. In some cases, it’s impossible, because your project might be a library for others, and you cannot run their code to review usage.
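As a sketch of the idea using Python’s coverage.py package: start collection, exercise the software the way it is really used, and report the lines that never ran. The application module and entry point here are placeholders, not a real API.

import coverage

cov = coverage.Coverage()
cov.start()

# Exercise the software the way it is really used: run the integration suite,
# replay production traffic, or click through the product by hand.
import my_application            # placeholder for your real entry point
my_application.main()            # placeholder for whatever drives a realistic session

cov.stop()
cov.save()
cov.report(show_missing=True)    # lines that never executed are candidates for vestigial code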

There are cases of faulty code being carried along as safe and trustworthy. When a library has proven, through positive evidence, to be valuable and safe, vestigial faulty code can hide in its less-travelled places. An otherwise safe library can be rendered a security risk when a previously untested route through the code becomes a new hot spot. High code coverage can help here, but nothing will protect you against incorrectly written tests or tests proving the code matches an incorrect design.

We take code with us from place to place for many reasons, but the two that crop up the most are these.

  • Old code is comfortable. A worse but familiar API is better than a new one for those not fed up with it.
  • Cleaning up before bringing it over is work. This means untested but existing code is preferable to new or unknown code with many tests.

You can see the danger in this. We trust the old code we have dragged from project to project, but when we finally start to use it, we find bugs and missing features. Individual bugs don’t feel like a significant cost. But together, an old codebase can contain hundreds or thousands of traps for an unwary developer.

Paradigms and mental models

Paradigms leak into how programmers think about the codebase and spread without impediment. When code pushes against an entrenched paradigm, it can be rejected during code review as not aligned. Familiarity is very comforting to many programmers. Unfamiliar code doesn’t have to be wrong for us to reject it. It’s a clash where we equate foreign-to-our-thinking with being technically wrong.

We should define wrong code as code that produces incorrect behaviour for given inputs. But when a reviewer claims code is only correct if it follows the established style and paradigm, then our code isn’t correct, even if it would add value to the customer. So, we must understand the environmental aspect again. Correct code, as in functionally correct, is only correct with regard to the spec, even when comments explain why the change was made. Code exists in the environment, but it also creates one in which programmers will make further changes.

If the new, technically better, unaligned code makes it into the codebase, it creates tension. Someone may come along and clean it back up to the original paradigm, possibly breaking it, even if it was objectively better left untouched. I have seen programmers revert valuable optimisations because they did not see the value in the change or preferred paradigm-aligned code over fast or flexible code. The spec is important, but you must also be aware of the environment in which the code resides.

Idioms and processes

Traditions spread too. This is less about a paradigm than about a habit or a technique. Idioms can stick around beyond their value until they become detrimental, and then it can take a while for people to break the habit of reintroducing them.

Getting C programmers to write idiomatic C++ takes a lot of work. Helping them migrate away from raw pointers for objects and towards shared pointers and value types has been difficult. So has persuading modern C++ programmers to stop writing raw for loops and prefer algorithms for performance and correctness, because raw loops are so infectious in a codebase.

Politics

If some code was hard to write, we naturally feel it has value. The feeling can be pretty strong. Deletion of defective code becomes politically tricky. No matter how well your source control works, some people don’t want to see working code removed from the main codebase. This is an anti-pattern because unused code has no value, and any code in a codebase limits what code you can add and how you can change the code. Delete the dead code.

Complex code also hangs around longer because people fear the sunk cost. Even if the complicated code isn’t used, the fear of writing it again is too intense for some to bear. People fear the cost of rewriting complicated code but for some reason don’t fear maintenance or knock-on costs as much as they should. Bad code exploits this as a survival trait. The more difficult it looks to write, the more likely it will get to stay around. Not only that, but it breeds too. A programmer surrounded by complicated-looking code is inclined to add more complicated code. The opposite may also be true. In line with the broken windows theory, simple, clean, good code reinforces good practices by its presence, but any dirty code lowers the bar for what is acceptable.

The most complicated, unused, slightly wrong, poorly performing, untested code can hang around for a long time if it was difficult to develop, especially if the original author is still part of the organisation, and even more so if they are now a senior staff member. It’s not uncommon to find architects inventing reasons to include, or at least naturally including, a pet project they worked on when they were a senior developer. That which is familiar is simple to them. Trying to prise an implementation detail away from an architect can be a dangerous political game. So long as the code strokes the ego of the most senior staff members, it will survive.

1

Single-header C++ libraries are often thought of fondly for this reason.

System of ego

For many years, Christopher Alexander worked directly on building sites while also continuing to work as an architect. He learned the engineering skills and builder’s yard talents necessary to create with his own hands. He knew the materials better through this immersion. He worked with others to construct many buildings and worked with organisations large and small. He knew the interactions of those in the trade better through this process.

Alexander shared his knowledge as he formed the written works The Timeless Way of Building[TTWoB79], A Pattern Language[APL77], and The Oregon Experiment[TOE75]. These tomes weren’t produced from some ivory tower but from the mind of someone proving their theories in the field. A little later, he worked on The Linz Café[DLC82], a project for a single commercial building, and then did the same with a housing project, publishing The Production of Houses[TPoH85]. Each of these projects increasingly tied his ideas together into a web of techniques, finally bringing something genuinely new to architecture.

Initially, and in parallel with his building work, the architectural establishment praised him from their vantage point. His new work created an opportunity for revolutionary techniques in planning buildings of many scales. His work on A Pattern Language was used directly in the planning and construction of many areas of the University of Oregon. And he founded the Center for Environmental Structure, to which most of his activities were linked.

The problems he saw in architecture reflected back on him as those invested in the status quo found him difficult to accept into their world. His theories began to break down barriers previously held to be impenetrable. Architecture had, before this point, been viewed as an art1. It required many years of study to become an architect who could put together a building expressing the values of its commissioner while maintaining the physical constraints of construction.

While the idea of creating a pattern language was novel and exciting to the architects of the time, the problems started when those from the establishment realised he had made something that was not built to support artistic visions2.

This is not to say Christopher Alexander’s buildings were purely functional—quite the contrary. However, anything produced using his methods would be something that used art to complete the form and bring harmony. Artwork and iconography were materials used, like wood and steel, to make a building more fully what it should be.

What is often called the “detail” of the building — its fine structure — is not some kind of icing on the cake, but its fine structure, the essence of what it is, and how it makes its impact upon us. The detailed pattern and ornament of which a building is made, is as much the essence of its structure, as the arrangement of sodium and chlorine atoms, is the essence of salt; or as the detailed arrangement of the amino acids is the essence of a human chromosome.

— Christopher Alexander, A Foreshadowing of 21st Century Art[AFo21CA93], pp. 7–8.

Christopher Alexander did not think of ornamentation as something you add to a building but as a valuable, perhaps essential part of it. In effect, you might say his buildings used art, whereas for modern architects, their art used buildings.

The following three images are kindly provided by the Center for Environmental Structure. Though they are all Christopher Alexander’s work, they are each different in detail because they use art and style to resonate deeply with their purpose. None of the art is there for the sake of some statement; instead, it helps complete the building.

Corner wall of the West Dean Visitor Centre

— Christopher Alexander, The Center for Environmental Structure, The Nature of Order, Book 2[NoO2-02], p. 405.

The West Dean Visitor Centre (above) is a beautiful building constructed from both traditional and modern materials. The combination of bricks and flint is a style in keeping with local buildings and provides a sense of continuity. The overall design is unique to the project but seems emergent from the environment rather than an attempt to announce itself by contrast.

Wooden ceiling of the Eishin hall

— Christopher Alexander, The Center for Environmental Structure, The Nature of Order, Book 2[NoO2-02], p. 429.

The wooden forms of the Eishin Campus include many works of beauty while also providing essential structural integrity.

Heisey House in Austin, Texas

— Christopher Alexander, The Center for Environmental Structure, The Nature of Order, Book 2[NoO2-02], p. 457.

Projects for families and individuals, such as the above, were developed to suit the needs of the homeowners and the land around them. Beauty was seen as a requirement during development, not something tacked on after.

Democratised development

A Pattern Language was accessible. People could use it to build the places they lived, loved, and worked in. It wasn’t for monuments or edifices dedicated to a concept or king but for places where memories could be made and communities could grow. It was intended for a manual labourer to read and understand the necessary steps to build according to their needs. With it, they could build their homes with only the occasional correction. This threatened the establishment. It empowered laypeople to solve their architectural problems without resorting to prefabricated construction. It also threatened governments by suggesting people build their work environment around how they worked best and not according to easy-to-verify modular designs.

This alone would have been enough, but it also appeared to give nothing to those who wanted to build to express themselves or create a physical manifestation of a statement. In the light of modern architecture, this was absurd and dangerous. The architectural establishment fought back by claiming Christopher Alexander was ‘anti-intellectual, naive, soft-headed, conservative, and uninterested in architecture as art’3. But beyond that, the threat went deeper. His work attacked their very values. It silently insinuated that the idea of an architect wanting to leave a mark and make an impression was, in itself, a foul goal. He was a massive proponent of an ego-free approach to construction4, believing ego was at the root of many of the most discordant buildings[NoO2-02][NoO3-05].

Architects had been brought up to believe their trade was all about making an impression, expressing something meaningful with the materials they had. Architecture was an engineering science, but the basis was artistic intent. When Christopher Alexander came along and claimed architecture was a job you should do to satisfy human needs, he implied their whole career was a sham—or conceivably worse, a destructive, wasteful distraction in the world.

Across the architectural literature on Alexander, there was a tendency to talk about him in the past tense, as a curiosity rather than a futurist. Writers would occasionally downplay his paradigm-shifting views as dreamy references to a time long gone, of no particular value in the modern world[NoO4-04]. In 1982, this bizarre reaction was used to create a bit of a performance in a staged debate[Debate82] at Harvard University between Christopher Alexander and Peter Eisenman.

At one point late in the debate, Christopher Alexander became angry at the idea of actually seeking out disharmony in architecture. He said:

Moneo intentionally wants to produce an effect of disharmony. Maybe even of incongruity. … I find that incomprehensible. I find it very irresponsible. I find it nutty. I feel sorry for the man. I also feel incredibly angry because he is fucking up the world.

— Christopher Alexander

For this, he is applauded, but the modern architect Peter Eisenman later said:

How does someone become so powerful if he is screwing up the world? I mean somebody is going to see through that …

— Peter Eisenman

This then led to further arguments before the debate came to an end.

For me, Eisenman’s belief that there would be consequences for making mistakes or causing problems was probably the root of many of the issues impeding Christopher Alexander. I suggest this belief in some inherent corrective justice was a quaint Victorian notion, to pre-use a phrase we shall encounter later.

I’m afraid I have to disagree with Eisenman on this point. I’m a child of the 90s, so I have seen plenty of evidence in my lifetime that people doing the wrong thing do not get pulled up for it if they work within the letter of the law. As a child, television taught me that bad people were punished, but in real life, I was taught people could get paid a lot more money if they did the wrong thing. What is at play here is a system that doesn’t just tolerate someone screwing up the world, but either doesn’t see it5 or possibly applauds it for testing the boundaries of artistic expression. The most salient point here is that mistakes are subjective. Something is sensing them and reacting—a system.

If you cause problems, you will only be noticed and blamed if they’re the kinds of problems that are obviously problems and you’re obviously wrong. There’s also the filter of these problems having to be problems for the people in power. There are many things people can do, or ask others to do, that slow things down or increase waste while appearing, on the surface, to be perfectly normal and acceptable, or even the obviously correct action. In the case of Peter Eisenman and Moneo, they both have an institution-sized, self-congratulating in-group supporting their egocentric approach to architecture. That’s a lot of people in power who see nothing they would consider a problem and who instead pay these modern architects tribute for the damage they did.

However, we’re not institutionalised architects; we can think for ourselves and see through the faux intellectual arguments for the balance of harmony and disharmony. We can point out that balance is harmony, so introducing discord and disharmony to balance with harmony is like showing both sides of the climate debate. It’s disingenuous at best. It’s sneaky rhetoric with no real value other than maintaining the status quo—a world in which those who do damage are rewarded, not reprimanded.

I believe harmony is about striving for a form of invisibility. Ego is the opposite; thus ego is not harmonious.

Christopher Alexander was fighting a system that had emerged from traditional architecture but was no longer playing by the same rules. The new goals were no longer aligned with the people who used and inhabited the buildings. Instead, it worked for those who put up the capital to build them and those who needed the commissions. The system rejected the pattern language approach because that approach denied the architect their fame and provided no opportunity for the client to flaunt their wealth.

This system and its response are mirrored in many structures of authority. The software pattern movement suffered a similar fate. I’m convinced that each year, at least one organisation suffers the same rejection of progress. You need to understand systems theory to fight against such processes. In this case, ego and authority used qualification and accreditation to protect those who would have suffered most under a revolution in architectural thinking.

3

Evidenced in a book review of A Pattern Language by William Saunders, Harvard Design Magazine, Number 16, 2002, https://www.gsd.harvard.edu/research/publications/hdm/back/16books_saunders.html

4

Early works (such as [TTWoB79]) refer to it, as do later ones (such as [NoO1-01]).

5

In John Gall’s The Systems Bible[SysBible02], on page 89, he states the principle COLOSSAL ERRORS TEND TO ESCAPE NOTICE. He then provides examples, including the dismantling of the street railway systems of America and how the replacement by private automobiles increased traffic density and energy consumption a thousand times over. And yet, the impact has hardly been noticed, and he even claims some call it progress.

1

[Grabow83] p. 30.

2

[Grabow83] p. 178.

Extrinsic and intrinsic effectiveness

We need to understand the difference between intrinsic effectiveness and extrinsic effectiveness. These are the terms I use. I believe they unambiguously capture a noteworthy attribute of systems. Intrinsic pertains to things internal and essential, and extrinsic applies to those attributes from without. Effectiveness in the sense of systems is their value, and anything’s value to itself will be related to its continuation—its autopoietic strength.

Most people think about extrinsic effectiveness when it comes to their code or engineering projects. It’s your typical performance measurement. It’s your acceptance criteria or your metric for success. In effect, extrinsic effectiveness is how well the system performs when measured against your selected goals, the measurements someone above you in a hierarchy would be concerned with, or the ones the market responds to. It’s often the part you want to maximise or adjust to reap the amplest bounty.

Elements can straddle the gap between these domains. Anything that does well according to its extrinsic metric will likely be repeated, continuing the system based on extrinsic values. In effect, introspective systems turn the extrinsic effectiveness of their components into intrinsic effectiveness by being an environment that selects for those attributes.

Opposing forces

The intrinsic effectiveness domain relates to how well any system maintains its own existence, that is, the system’s inherent survival traits. It’s not the same as extrinsic effectiveness and can often oppose it. Consider the power of poorly managed agile development. The system of badly implemented Agile principles is powerful because, as with governmental systems, anyone inside the system saying it’s not working and it’s hurting more than helping will be told, ‘It’s not X’s fault. You’re not doing X right’. The intrinsic effectiveness of a system can be its immunity to contradiction or removal. You might recognise these as the attributes of a virulent meme.

Remember, each system has two different areas of effectiveness. Systems with intrinsic effectiveness include those of doomed complexity, such as when an architectural design or pattern appears to be more flexible or cleaner than the straightforward, dumb technique but needs fix after fix for each corner case encountered, rendering the clever, clear solution neither clear nor valuable. It is, however, clever in an evil way, as it is self-maintaining.

Another example is what happens when patches or workarounds become the norm and begin to shape normal operation. Current CPU architecture is the result of our coding techniques over the last few decades. If you honestly want to get the most out of the available materials, you can’t. I mean the literal materials, as in silicon, copper, and gold. The architecture we have these days is not an optimal solution. Rather, it’s a solution to a different problem: the problem of running code as it has been written up until now.

Similarly, GPU drivers are the result of the most commonly used graphics techniques. The Vulkan® API came about because the set of capabilities of a GPU had stabilised, and the rate at which new clever optimisation techniques were arriving was slowing down. The new API removed some of the previous limitations.

We will likely see the same pattern play out every decade and a half. New technology will eliminate the complex helper functions that burden the highly efficient architecture developed under the previous generation’s best-performing solutions.

Some people believe this happened with mobile CPUs and will occur with newer architectures, but the CPU domain is strange in that the dominating consumer PC operating system holds it back from taking on new technology. A walled-garden operating system allows for easier migration, and we have seen that happen with Apple® switching architecture not just once or twice, but three times. The intrinsic effectiveness of the x86 instruction set lies in its symbiotic relationship with the Microsoft Windows™ operating system’s commitment to backwards compatibility.

Extrinsic and intrinsic effectiveness are orthogonal to each other. You can have one without the other. The problematic systems are those tenacious ones with weak extrinsic effectiveness. And that leads into the next section: patterns that exhibit strong intrinsic effectiveness but have weak or even negative value to their inhabitants.

Anti-patterns

As said before, anti-patterns are self-reinforcing, like all patterns. They are feedback loops, but unlike other patterns, they consume energy from their hosts rather than protecting or repairing them.

We see an anti-pattern in action in the staged debate[Debate82]. Peter Eisenman appeared to believe there were consequences for making mistakes or causing problems. This belief is probably the root of many issues in many organisations. But it’s just that. It’s a belief. It’s not true.

I will repeat it for those who missed it earlier: if you cause problems, you will only be blamed for them if they’re the kinds of problems that affect those in power.

It’s not just in architecture. These cultural anti-patterns are everywhere. Think about the number of times you’ve heard someone suggest,

'I think we've got a communication problem. We need to have people working closer together to resolve this'.

And no one calls them out on how working closer on a project can decrease speed because communication becomes a massive overhead. Or perhaps they say,

'We have too many meetings, and nothing is getting done. We should trim them back. Let people concentrate on the work'.

But again, no one fights back, even though the meetings could be lengthy because things still need to be disambiguated, and getting stuff done would be getting the wrong stuff done badly.

Both these statements could be true. However, you could make either statement about the same situation, and neither would be out of place. So which do you choose? When an observation cannot be denied, and the action seems like a reasonable correction, why would you try to stop it? This action-paradox blindness can affect any group that prioritises solutions over analysis.

This also happens when you are surrounded by people who think similarly or when the solution is outside your normal range of strategic comprehension. We reinforce bad solutions to our problems. As another example, if everyone has been educated in business in the same cost-accounting way, you can end up with everyone trying to cut costs rather than raise profits, dooming the company to a slow death.

A system will hand out rewards for actions it considers correct. If it hands out a reward, the action will be repeated or strengthened. If the senses of a system are tuned to reward poisoning the well, the well will be poisoned while people gather around to cheer on the poisoners.

People have such a deep belief in natural justice that they can’t see the anti-patterns. However, you don’t need to recognise or understand the feedback loop to be affected by it. Interestingly, you can ferret out anti-patterns without knowing their specifics, as many use the same defence mechanisms to maintain their presence.

Age as a defence mechanism

Age provides a sense of validity, even when the ancient element or action is detrimental. We spent a long time assuming bloodletting was a good treatment for disease. I can well imagine a town planner saying ‘Zoning must work at least to some degree because it’s lasted so long’, but this assumes age is an indicator of goodness.

Zoning as a way of restricting the types of buildings in an area has been around for a very long time, going back as far as disallowing certain building types within the walls of a medieval city. But what we normally mean by zoning, where the government maps out residential and industrial areas and strictly enforces the separation, has only been around for a little over a hundred years.

Just because something has been around for a long time doesn’t mean it’s good. It’s hilarious that the people who use this argument for why zoning is good are the same ones who say traditional architecture isn’t good because it’s old. Something must be wrong here. It must be about more than whether or not something has worked for a long time, as that only means it’s remained active after installing itself. That’s the reason why we have to think about the differences between extrinsic and intrinsic effectiveness.

Zoning was an idea that worked to some extent, but we misunderstand what success means if we ignore intrinsic effectiveness. We need to reconsider these stories in light of what keeps them alive. The property that makes anything a living pattern is its tendency to survive. To this end, we should ask if something is fit for its environment. So, we must investigate the environment.

For zoning, the environment is an everyday North American Western democratic policed capitalist environment. It’s an environment where money will tell you what will get made, as breaking the rules has very little value when there are armed guards to deter you. In this case, if you start a practice of zoning, you will end up reinforcing the zoned areas because one zone type generally begets another in response.

An industrial zone begets workforce residential zones, which beget commercial zones. Once you have many of them, external systems cater for them. Road layouts and utilities become zone-typical.

  • Concrete pouring contractors pop up next to highways built with concrete so that their concrete can reach greater distances.
  • Systems to build become specialised, with timber merchants extending into making regular-sized stud walls, standard height stairs, and predefined apex angles.
  • Shops and malls are built with ample parking to cater for the distance to the residential or industrial areas.

Buildings are erected in zones before their need has been established, as developers see a pattern of demand and being early can lead to profit. More garages and car dealerships appear because the demand for public transport far outgrows the capacity to provide it, with funds allocated instead to services with greater voter impact. More cars demand more fuel and infrastructure, so land is claimed fast to keep costs low. The low land cost makes building large, low houses cheap, extending the sprawl’s breadth and making car transport ubiquitous.

All these support systems and the reactions to them make it harder to build non-zoned areas. Non-zoned areas are seen as complicated or bespoke. Centralised command and control economies are generally wary of bespoke constructions. They sound expensive and unregulatable.

But it’s not all bad, right?

Zoning builds infrastructure efficiently for a known purpose. It’s good for keeping people’s homes away from pollution. It’s excellent for making deliveries to commercial buildings easy and non-intrusive. Malls that take deliveries through an entrance separate from the public parking area allow for constant deliveries throughout the day without interrupting commerce or preventing the public from using any of the facilities. In an old city such as London, delivery trucks can get in the way of pedestrian traffic and passenger vehicles.

It would seem that zoning does have a positive value. But it’s an anti-pattern because, despite its apparent benefits, it also has worrying drawbacks.

When you move people away from pollution caused by industries, you make it easier for people to forget they exist, reducing the incentive for the industry to comply with regulations. You also move people away from each other during the day, thereby not only extending the time families spend away from each other by commuting but also inhibiting bonding over breaks such as lunch.

You make it harder for neighbourhoods to build up, as many inter-family bonding activities spark from recreation or work. When you travel by automobile to commercial and industrial districts, you lose another link to your neighbours. Making cars, trains, and buses the standard way to get to work and play produces more roads overall.

All this leads to less land to build on for everything else. It also puts more cars on the road, which is energy-inefficient for the country but a natural consequence of failing to build good public transport into your zoning laws.

What happens when you don’t have zoning? You end up with cities that have their own problems, but you don’t end up stuck in a cycle of enforcing non-zoning. London is a large city with little in the way of zoning laws. Building up the city has taken a very long time, but its structure allows walking to work even today. European towns and villages are walkable, with homes often less than a mile from commercial and industrial workplaces. Whether this is better is somewhat subjective, but the health of those living in European countries is, as a matter of record, better than that of the average U.S. citizen.

Now consider pull-requests. They work to some extent. However, the lack of fast feedback leads to increased rework, and they invite more work in progress as you start on something else while waiting for a review. For the reviewer, they interrupt flow, and the hand-off itself adds movement of work between developers. These are all wastes according to Lean methodology. Is the pull-request concept an anti-pattern?

We’ve always done it that way

'But it's how we've always done it'.

Except, nothing is ever how we’ve always done it. Someone was the first, and someone will be the last. What’s new is not easy. What is old is easy, even if it’s complex. The talk by Rich Hickey, Simple Made Easy, points out the difference between easy and simple, and between hard and complex. But the one point that hits home regarding the anti-pattern of We’ve always done it that way is that what’s familiar is easy.

When you’re an old hand on a project, complex things can appear easy because they are familiar, even if they are complex. When you’re new to a project, everything is both how it’s always been done, and yet also strange and unfamiliar.

Therefore, to you, because nothing is familiar, nothing is easy. Some things are both hard and complex. When you point out how complex something is, you can be met with dismissive comments due to how familiar, and therefore easy, the complex design seems to the old hands.

When you arrive fresh at a project, you expect everything currently there to be a rule for how things are done. It’s very difficult to announce,

'Well, maybe it shouldn't be done this way'.

While you’re still getting your bearings, you want to ensure you understand everything. You ask questions, but they’re usually in the style of how, not why. And that’s normal. People won’t argue against existing stuff easily. They haven’t the confidence. If they’ve just turned up, then everything is solid and known and unchanging to their eyes.

People onboard to a team and notice things that shouldn’t be the way they are. They will wonder why but not say anything. They can’t claim it could be better because they don’t know the level of investment. Maybe it’s only been there for two or three weeks, and someone who doubted their understanding of what they were trying to solve implemented it on a whim. It could have been there for many years and shipped multiple products. Without asking and knowing, it’s difficult to raise an issue. It saps your energy, and you only have a limited amount of willpower. When you’re new to the job, you’re also worried about ruffling feathers, so you tend to be nicer than you probably should be.

We’ve always done it that way is an anti-pattern—a really strong one. It’s reinforced by people coming late to a project who have better ideas but feel they can’t say anything. It reinforces the idea that we should restore rather than repair or improve to achieve something different and better.

We’ve always done it that way is a case of being stuck in a valley of success in a mountain range you can’t get out of because everywhere else looks terrible from where you are. You can’t even see the real problems you’ve been causing.

They fight for themselves, whether good or bad

Positive patterns, unlike anti-patterns, create self-righting (negative) feedback loops that promote actions which are good and healthy for their environment. They create systems that fight against disruptive forces. They repair damage and self-create when other patterns are present. But much of the same applies to anti-patterns. The only thing that differs is our interpretation of the result of the actions. If the actions are not seen as healthy for the hosts or environment, we call them anti-patterns. Destructive anti-patterns can create self-fulfilling bad behaviour. Just because a pattern is self-reinforcing doesn’t make it good or bad, just successful. We must always remember to equate success with persistence: not a pattern’s compatibility with the agents hosting it, but how it adapts to maintain its existence.

We define a pattern as good or bad, not because of its self-reinforcing nature but because of the result according to some observer or inhabitant of the environment in which the pattern plays out. In effect, there are no objectively good or bad patterns. All patterns fall somewhere along two axes: the axis of strong or weak reinforcement and the axis of the opinion of those whom the pattern affects. We take the side of the humans, the developers, the customers, and organisations concerning these patterns. But remember this difference: a pattern can be objectively good on the survival axis but is only ever subjectively good on this second axis of benefit to the environment.

Anti-patterns are patterns that have strong intrinsic effectiveness, and we can identify them by the evidence of their negative effect on their environment. Usually, they avoid being placed correctly on the subjective axis, as it affects their survivability when they are identifiably bad. So, anti-patterns aren’t immediately recognisable; they wear camouflage.

It is a stronger position to feign alliance than to antagonise relentlessly. Anti-patterns fake alliance well. They provide small justifications for their existence and so survive limited investigations. Only when we look deeper into an anti-pattern do we see the problems it causes.

Anti-patterns often cause a cycle of repair. A new corner case you weren’t aware of turns up, and you have to fix the fix to the fix. In effect, anti-patterns often get you solving the solution over and over again. I think Christopher Alexander was aware of them but discounted them as not worth documenting concretely. This might have contributed to how his projects played out. Being aware of the flows of value to those working according to the established system may have allowed him to play the game enough to bring a little more light into the world.

Examples of how anti-patterns work

Negative examples are often much better teachers than positive ones. And because the subject is anti-patterns, I can share some examples that explain how they work. This section can be skipped if you already feel you understand anti-patterns, but read on if you wish to gain a deeper understanding of the mechanisms of their survival.

Convenience as a defence mechanism

Sometimes, just being easy and wrong is better than hard and right. These anti-patterns are like sweet food and skipping leg day.

Bringing the problem into the code before analysis

An anti-pattern of increased coupling.

Object-oriented code is a response to the difficulty in creating proper abstractions for the parts of the problems to be solved by software. This is why the problem pieces often end up mirrored in the object design. Unfortunately, when you mirror the problem into the data and code of the solution, you introduce a lot of baggage, expectations, and complications.

When objects in code have a one-to-one mapping with entities in the real world, we have brought too much of the world into the program domain. Objects are more powerful when they represent larger concepts. Because objects are used to represent entities from the real world, with all their meaning still attached, analysis of the state of the world becomes complicated and more expensive.

If your language allows you to bring the problem into the codebase, that implies you haven’t been forced to analyse the problem before writing the solution. Most solutions are complex because they solve an ambiguous or poorly-refined problem. So, any language that allows you to represent a poorly-refined problem will not give you feedback on your next steps. Instead, it will enable you to create an ineffective solution to a problem sooner. Solving problems fast will create a feedback loop that implies this approach is better, as solved problems look like progress. But this is an anti-pattern; we know complexity, hidden or not, is the doom of all large projects.

In Domain-Driven Design[DDD04], it’s stated that the domain will be revealed over the course of development. That means the objects, their names, and what they mean are not set in stone at the beginning. It is, in fact, the core activity of an engineer to fully understand a problem and remove ambiguity before settling on a design.

If the domain is going to change and the problem will be better understood over time, then bringing the problem domain as it is understood in the first development phase into the codebase invites change at a fundamental level later on. This is fine if you have planned for such a major upheaval, but most languages don’t have good object-model migration tools. Refactoring tools are good, but not magic.
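
To make the contrast concrete, here is a minimal sketch in C++ with invented names (Wallet, Customer, Till, Shop, Transfer, Ledger); it illustrates the trade-off and is not a recommended design for any particular system. The first half mirrors real-world entities one-to-one, so the world’s baggage follows them into the code; the second half represents the larger concept the program actually needs, a record of value moving between accounts.

    #include <string>
    #include <vector>

    // World-mirroring design: every class maps to something you could point at.
    // The baggage of the real world (customers carry wallets, shops have tills,
    // refunds are a special case) follows the entities into the code.
    struct Wallet   { double cash = 0.0; };
    struct Customer { std::string name; Wallet wallet; };
    struct Till     { double cash = 0.0; };
    struct Shop     { std::string name; Till till; };

    // Concept-level design: the program only needs the larger concept of value
    // moving between accounts. Customers, shops, refunds, and whatever the
    // domain reveals later are all just parties to a Transfer.
    struct Transfer {
        std::string from_account;
        std::string to_account;
        double      amount = 0.0;
    };
    using Ledger = std::vector<Transfer>;

    int main() {
        Ledger ledger;
        ledger.push_back(Transfer{"customer:alice", "shop:corner-store", 3.50});
    }

When the domain shifts, the first shape tends to demand remodelling of the classes themselves; the second tends to absorb new knowledge as data.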

Last-minute creation, or FetchOrAllocate, GetOrCreate, or FindOrConstruct

An anti-pattern of repository locking.

We encounter the same issues as with the construction of Singletons. When creating the Singleton on first use, we have to lock. When we create an entity on demand based on a unique identifier, we also have to lock. But there’s a difference inherent in locking against a unique identifier rather than a Singleton. When the location of the entity is not a simple globally allocated slot, but instead some dynamic container, we have to contend with locking that container. Now we can potentially block another entity from registering with the same container at the same time.

We see here that the issue with last-minute creation into a repository is that it blocks parallel access. This means you can end up with deadlocks if the entity needs to access different repositories to complete its construction. So, what is the wisdom here? First, the problem with Singletons is that they need to lock their location before they can create the thing to put there, and that lock can cause a deadlock. Second, last-minute creation of a uniquely identified object in a repository adds further complications, because it must lock the repository’s container as well as guard against multiple construction.

We can suggest using containers with concurrent append or insert properties to avoid the deadlocks, but that doesn’t solve circular dependencies between different entities, nor does it really solve multiple construction. Those seem like problems of usage rather than of the pattern itself. Another solution is to forbid the constructors of the entities from calling upon other repositories until a later moment. Allocate if necessary, but don’t fully construct until the entity is used by whatever fetches it. In effect, prefer dumb constructors.
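
The sketch below shows one way this shape can play out, assuming a hypothetical Repository type keyed by string identifiers; EnsureInitialised and Player are invented for illustration. It keeps the container lock small and the constructor dumb, and pushes any work that might touch other repositories outside the lock. This is the mitigation described above, not a full solution to circular dependencies.

    #include <memory>
    #include <mutex>
    #include <string>
    #include <unordered_map>

    template <typename T>
    class Repository {
    public:
        // Fetch an existing entity or allocate a new, not-yet-initialised one.
        std::shared_ptr<T> FetchOrAllocate(const std::string& id) {
            std::shared_ptr<T> entity;
            {
                std::lock_guard<std::mutex> guard(mutex_);
                auto& slot = entities_[id];
                if (!slot) {
                    // Dumb construction only: T's constructor must not reach
                    // into other repositories, or the deadlock risk returns.
                    slot = std::make_shared<T>();
                }
                entity = slot;
            }
            // Any expensive or dependency-touching initialisation happens here,
            // outside the container lock, guarded by the entity itself.
            entity->EnsureInitialised();
            return entity;
        }

    private:
        std::mutex mutex_;
        std::unordered_map<std::string, std::shared_ptr<T>> entities_;
    };

    // Hypothetical entity type used purely for illustration.
    struct Player {
        void EnsureInitialised() {
            std::call_once(once_, [] { /* load data, talk to other systems */ });
        }
        std::once_flag once_;
    };

    int main() {
        Repository<Player> players;
        auto p = players.FetchOrAllocate("player-42");
        (void)p;
    }

The design choice being sketched is simply that the container lock never wraps anything that could re-enter another repository; guarding against multiple initialisation becomes the entity’s own, local responsibility.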

Obvious truth as a defence mechanism

Best practices that conflict with other best practices. Things that seem obviously true but in fact may not be. Assuming your problem is unique. Acting on common expectations without evidence.

Physical code layout matches logical

An anti-pattern of unnecessary coupling.

We don’t remember anything in great detail if we have only visited it briefly or been away for a while. We are therefore disadvantaged if we need to keep bouncing between different source files to see how the code works. One class per file is an anti-pattern because we have to jump between files to track the flow of messages between objects.

But if we have to bounce around between files to figure out what is happening, is that the symptom of a different problem? Yes, and it’s because the value in some code lies in the emergent properties of the interaction between elements. So why is the interaction held across multiple files?

In some languages, this is inevitable. Java demands it as part of the language. But in C++, keeping files and classes mapped one-to-one is a cargo cult anti-pattern. The pattern maintains and reinforces itself through We’ve always done it that way but is often at the root of longer compile times and moves some optimisations from compile to link time. As linking is harder to multi-thread, you push some phases of your compilation into a serial bottleneck.

In data-oriented programming (not data-oriented design, but a different, functional-programming paradigm that espouses the separation of data structures from the transformational logic and eschews mutability in favour of a monadic style for transformation), logic can be grouped by intent. Utility functions are grouped as simple verbs for a system, and complicated functions are grouped by usage patterns. It follows that a developer can read and easily understand these transformations, as they are all physically located near where they are used.

The real problem is how related classes are so often kept on different ‘pages’. Moving things away from each other when they are related makes it hard to read the flow of the code. Encapsulation is often used to claim the opposite. Nonetheless, the chance a class is fully understandable by its interface is low, which means you must often inspect the implementation. In debugging and maintenance, this is always the case, even in well-defined APIs; you know something is not doing what it claims to be doing, so now everything is untrustworthy.

The physical layout of the code should match the easiest way to read it. It should follow the process the computer would go through. Whenever the next line the computer runs isn’t the next line in the source code, you’re adding comprehension blockers.
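
As a small illustration of reading order matching running order, the sketch below keeps an invented single-use helper (JoinWithCommas) directly above its only caller (DescribeOrder) in the same file, rather than promoting it to its own class in its own file. This is only a sketch of the principle, not a claim about where every helper belongs.

    #include <iostream>
    #include <string>
    #include <vector>

    // Single-use helper, kept immediately above its only caller so the reader
    // does not have to open another file to follow the flow.
    static std::string JoinWithCommas(const std::vector<std::string>& parts) {
        std::string result;
        for (const auto& part : parts) {
            if (!result.empty()) result += ", ";
            result += part;
        }
        return result;
    }

    std::string DescribeOrder(const std::vector<std::string>& items) {
        return "Order contains: " + JoinWithCommas(items);
    }

    int main() {
        std::cout << DescribeOrder({"tea", "biscuits"}) << "\n";
    }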

Overgeneralisation

An anti-pattern of removing important connections.

Overgeneralisation is when you lose specificity. It’s when you had information you could have used at one layer of the process, but along the way, due to the generalisation of the API, you could no longer pass it along with the request. This can happen by omission of arguments or context, but it can also happen with type sinks. These are places where type information is lost because a common base type is used to transfer the information to the next participant in the communication.

Strings are potent type sinks as they can represent many different types of information. In a 2014 talk, Scott Meyers1 uses examples of filenames, customer names, and regular expressions. The danger of sinks is that they circumvent the type system. You have to rely on memory or on careful variable naming to defend yourself against accidentally using the wrong variable in the wrong function.

All types can be sinks, but some are more likely to sink than others. This principle applies to integers and floats/reals, booleans, and even file handles. A floating point doesn’t have a meaning, but kilograms, cubic centimetres, or Celsius do, and using a bald float ignores that. A file handle might refer to a local file, memory, a log, or stdout, and might be open for reading or for writing. When we let the same variable possibly be any of a set of things at runtime, we can lose vital information that could help with debugging, maintenance, or performance.

However, I must repeat: all types can be sinks. Your Kg type might be sinking the maximum payload for a vehicle into a simple mass value, whereas the meaning is payload, not total operating mass. If you use the maximum payload value in the calculation for how many of these you can carry on a ferry, you will be overestimating and the type system will not have saved you.

This is the purpose of Apps Hungarian. In Apps Hungarian, the naming convention required you to prefix each variable with its meaning, such as Kg_, CC_, TempC_, Px_ for pixels, or Kt_ for kittens. Whatever the team decided was an important distinct type got a prefix. It’s also why we don’t like Systems Hungarian. The prefix there usually related to the literal type. A filename was an sz_ type, a zero-terminated string, but so was a customer name. Systems Hungarian was less a design aid than a workaround for the poor IDEs of the time.
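
A minimal sketch of the type-system version of that defence, using invented Kilograms and PayloadKilograms wrappers and a made-up VehiclesPerFerry calculation: it is the same distinction the Kg_ prefixes encoded in names, but enforced by the compiler, so the ferry mix-up above fails to build rather than silently overestimating.

    #include <iostream>

    struct Kilograms {            // total operating mass
        explicit Kilograms(double v) : value(v) {}
        double value;
    };

    struct PayloadKilograms {     // maximum payload only, not total mass
        explicit PayloadKilograms(double v) : value(v) {}
        double value;
    };

    // How many vehicles fit on a ferry with a given deck limit? The parameter
    // types document, and enforce, that we need full operating mass here.
    int VehiclesPerFerry(Kilograms deck_limit, Kilograms vehicle_mass) {
        return static_cast<int>(deck_limit.value / vehicle_mass.value);
    }

    int main() {
        Kilograms deck_limit{50000.0};
        Kilograms truck_mass{12000.0};
        PayloadKilograms truck_payload{8000.0};

        std::cout << VehiclesPerFerry(deck_limit, truck_mass) << "\n";  // fine
        // VehiclesPerFerry(deck_limit, truck_payload);  // does not compile
        (void)truck_payload;
    }

The explicit constructors are what stop a bare double, or the wrong wrapper, from sliding through unnoticed.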

Future proofing

An anti-pattern of preparation over valuable action.

When developers suggest building reusable code at the outset, they take a gamble. They bet their code will be sufficiently well documented, discoverable, and isolable to be used as easily as a common runtime library feature. I wouldn’t make that bet. I think it’s better to make the code solve the real problem I have right now. Better still, it’s good to know the odds.

But reusable is good. It’s an obvious truth. So why not aim to make something reusable? What are the chances the code you write today can be reused by some other project in the future? What’s the chance the code you write can be used in multiple places in the same codebase during the project’s lifetime?

The driver for this anti-pattern is compounded by the wonderful feeling we get when our code is reused. We wrote something that was useful more than once. We shipped our contribution in multiple products. We were a force multiplier for the organisation. But we forget all the other times we built reusable software that wasn’t. We remember the good times. We also forget about all the costs of maintaining the reusable but not reused code we wrote all those other times.

To be reusable, code has to be both stricter and more generic, which tends to make it more brittle and lower performing. Even though it will only be used in one location to start with, it takes on these constraints in the hope of future reuse. We also need to remember that we did not go hunting through the codebase to see if another piece of existing code could solve our problem, so why should we expect anyone to discover ours?

My principle for defending against this anti-pattern is to treat reuse as a requirement like any other and never intentionally write reusable code up front; only refactor code into reusable code when the need for reuse finally arrives.

Team growth and company growth

An anti-pattern of overlooking impact.

Teams work best at certain sizes, but successful teams and their leaders can feel they need more staff to complete even bigger tasks, and their track record makes them look like a worthy bet to management. So they get the extra staff, which destroys the team’s cohesion, leading to failure. This is an anti-pattern because it’s a recurring pattern of ruin, and you should protect against it. Awareness of the pattern can be enough, but putting physical reminders in place, such as making team spaces only barely large enough to contain an oversized team, can nudge people without telling them directly.

Wrongly rewarded

Sometimes, the way we reward actions is wrong, leading to the anti-pattern. We create the Cobra Effect, causing problems by not reminding people of our core goals.

Productivity guilt

An anti-pattern of local optimisation.

What often feels the most important is being productive. Developers feel ashamed when not progressing, even if it’s not their fault. Whatever the environment, studio, home, or cafe, every developer aims to produce. Feeling productive is important. We feel more productive when we can go about our tasks as we know best. Strict rules from IT or policies put in place to reduce cost can add friction and shackle developers. The rules interfere with what we value the most.

However, an organisation cannot listen to the desires of the developers at the expense of ending up on the wrong side of a historic event. Security failures at large organisations can cost lives. They also cost money and time, which goes against the developer’s wish for productivity.

Feeling productive and being productive are not the same thing. A productive developer can be an idle developer. By not doing anything, they are not causing waste and not wasting anyone else’s time. In this way, a developer can help the organisation. Sometimes, the most productive thing to do is to rest, research, and read. Without goals, explicit or otherwise, employees can still find work to do on a project, regardless of how counterproductive their efforts are, because programmers like puzzles. Better developers like making products and avoid creating more work than is necessary.

The difference between a professional developer and a less-than-professional one can often come down to a level of pragmatism regarding what work they don’t do rather than how they manage to do it all. The professional will prefer to do something with directly associated customer value. The less professional developer will choose to do the most interesting work. Interesting does not mean easy. Developers can get caught up trying to solve tough problems that benefit the organisation very little, so they are wasteful but they are not lazy.

Complicated code is cool

An anti-pattern of inattention to global strategy.

There’s a value problem with complicated code. People are proud to have solved a problem in a complex way. They often see little value in going back and simplifying, or when they do, they aren’t given ample time to reduce the cognitive load. If you go against the grain and try simplifying things, you can be reprimanded for changing things unnecessarily. This is a cultural issue, and it affects all programming in all languages.

This is a problem more prevalent in seniors than in juniors. They feel the code was hard to write, so it should be hard to read or work with. Not all seniors, but many have insecurity issues or impostor syndrome, and that leads to needing to prove themselves—a badge of honour. They wear the badge in the form of a complicated API or being ‘the only one able to debug it’.

The Towers of Hanoi is an example of a system that does not live because it’s always blocking itself. There is, realistically, only one efficient solution to the puzzle. Many puzzles are precisely this: the idea that there is only one way through. There is a breed of programmer that loves this kind of puzzle, and I call them Solvers. Solvers will work through tricky puzzles happily because there’s something rewarding about the challenge of finding a solution or the only solution.

Solvers crave this feedback. Their goal is to solve optimally. With these problems, they can at least be confident when they have found a solution. But even an optimal solution might be optimal in the wrong domain. Think about how you design a housing plot, ensuring you have space for a few cars. You can save space by making a long, narrow drive. But having all your vehicles on a single narrow driveway costs you time when trying to get one car out if it is parked further in. Optimal solutions in space are rarely optimal in the domain of time.

If solving a puzzle or complicated code problem is rewarding, then why would a Solver choose to make programming more straightforward and, therefore, intrinsically less rewarding? There is nothing inherently good about the challenge of making everything fit and work together. Any goodness must come from getting it done. The buzz of a challenge runs counter to that. ‘Solution-finding’ is a craft people can get passionate about, but work is not leisure. So, solving complicated problems is a strong anti-pattern affecting some developers. In fact, some get caught up solving the same problem for years, either over and over or just once to a greater and greater degree, with no end in sight.

1

The part on type sinks is at 56 minutes, near the end of The Most Important Design Guideline: https://www.youtube.com/watch?v=sfLZ7v9gEnc

Some things we want are self-destructive

These are the opposite of anti-patterns. Where anti-patterns have robust intrinsic but weak or negative extrinsic effectiveness, these self-destructive patterns invert both axes.

These are the self-destructive organisations of elements that won’t stay put unless maintained. Things such as hexagonal1 architecture don’t come about naturally. They have to be invented and adhered to on purpose.
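
As a hedged illustration of what ‘adhered to on purpose’ means, here is a minimal ports-and-adapters sketch with invented names (NotificationPort, OrderService, ConsoleNotifier). Nothing in the language forces the business logic to speak only to the port; the boundary holds only for as long as the team keeps choosing to honour it.

    #include <iostream>
    #include <string>

    // Port: the boundary the business logic is allowed to know about.
    class NotificationPort {
    public:
        virtual ~NotificationPort() = default;
        virtual void Notify(const std::string& message) = 0;
    };

    // Business logic depends only on the port, never on a concrete technology.
    class OrderService {
    public:
        explicit OrderService(NotificationPort& port) : port_(port) {}
        void PlaceOrder(const std::string& item) {
            // ... domain rules live here ...
            port_.Notify("Order placed: " + item);
        }
    private:
        NotificationPort& port_;
    };

    // Adapter: ties the port to a technology; swap it without touching the logic.
    class ConsoleNotifier : public NotificationPort {
    public:
        void Notify(const std::string& message) override {
            std::cout << message << "\n";
        }
    };

    int main() {
        ConsoleNotifier console;
        OrderService orders(console);
        orders.PlaceOrder("coffee");
    }

Swapping ConsoleNotifier for some other adapter touches no line of OrderService, which is the payoff the footnote on Ports and Adapters describes, but only while the discipline holds.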

Things that fall apart when you aren’t actively maintaining them can’t be patterns.

Without self-reinforcement of the configuration, it can’t self-form. The maintenance cost reduces the value of these kinds of solutions. We have to overcome the lack of reinforcing feedback with diligence. This is where we must use our authority and persuasive skills to maintain a good state.

Axes of Effectiveness.

Some non-patterns can self-maintain once started. Quite a lot of pseudoscience is self-maintaining. Each instance of pseudoscience is undoubtedly not self-starting2 because, by definition, it is not discoverable from truth or empiric evidence. But note how quickly detractors are seen as the enemy, how evidence is seen as a conspiracy against the science, and how any positive evidence is often blown out of proportion or regurgitated without criticism.

Architectural idioms are valuable because they can provide repeatable solutions to commonly occurring problems. We assume they are idioms because we can’t be sure they are self-forming. They aren’t self-destructive, as idioms still have a mechanism for spreading. There are many things in this non-self-destructive set. They include modern observations such as domain-driven design, microservices, language construction idioms such as the Backus-Naur form, and relational database idioms and their normal forms. There is a huge set of things that may reoccur, but we lack evidence that they are self-forming. Many have tremendous value and provide a foundation upon which future realisations can be made. But many are likely not patterns.

Then there are the self-destructive or exploding patterns. Rules about spending, for example. You cannot save up to be Google, but organisations still try to cut costs rather than increase revenue as a default reaction to reduced profits. You cannot cut costs to become Amazon; you must invest to create value and grow. If your number one intention is to grow, cutting costs is usually a waste of time.

Management

In any organisation that has a production pipeline, the pipeline has a natural tendency to grow longer if you don’t observe the value in waste reduction. DevOps was built on the concepts of the Theory of Constraints and actively fights against the self-destructive nature of taking the obvious next step.

The obvious route for a manager is cutting costs and optimising what they can. However, you can only optimise what you can see, and it used to be that many managers were blind to what to optimise due to their education. Managers would buy more capacity rather than learn to exploit what they already had. In software development, the obvious solution was adding more developers to a project. The Mythical Man-Month[MMM75] addresses this falsehood. After the DevOps revolution, the recognised solution was to reduce end-to-end iteration time so that developers could create value and receive feedback faster. It’s a shame that in the nearly 50 years since the book was released, not all management teams seem to have acknowledged these observations.

The crisis of management alludes to the exploding nature of these self-destructive patterns by way of saying that production processes become worse over time, not because of bad decisions but because of a lack of informed decisions. When we take the default action, the system’s self-forming, self-destructive nature shows itself.

The default action is the action of the unguided system. Management is needed when a system naturally moves away from the extrinsically good state and towards a contradictory but intrinsically good state. Management, therefore, is always needed when the equilibrium of a system does not match the core goals; otherwise, the system will always regress to a worse state.

In effect, management’s duty is not to execute the obvious actions but to make the correct actions obvious.

1

This is the Ports and Adapters approach to layering software so that business logic stays within well-defined boundaries and only uses ports for communication with its dependencies. Adapters keep the business logic from being tied to specific technologies and let new use cases reuse the existing logic.

2

A new pseudoscience pops up every now and again, so the concept itself, like constellations, is a pattern of human behaviour, but each one is idiomatic.

Differences Between Software and Building Architecture

Because the environment defines fitness, we can now explain why software and traditional engineering have such big differences. The nature of the environment of the builder and the end user is always different. But, in software engineering, there are multiple environments and more entities within them. This means software’s fitness is measured in unexpected ways and sometimes seems oddly unrelated to the end user.

In this chapter, I will explain how the environments differ and what impact that has on the developers and the code they write. I will explain how the type of work is different at a fundamental level and how it impacts how we work and what we value in the product.

Environments and inhabitants

In one sense, it is simpler for patterns to be recognised in the real world, as the builders and the new owners of the buildings live in the same reality. They breathe the same air and touch the same ground. The new owners lean against the same walls and pass through the same doorway the tradespeople used when constructing the building. The patterns of physical construction make more sense than those of software, as the same values apply to forming and form. We evaluate the construction environment with the same rules of thumb as we assess our final buildings. This is a key difference and relates to a point that Christopher Alexander would rail against.

Christopher Alexander appeared to believe that you cannot design a building on paper. Eventually, you will always need to do the final designing on-site. There is no substitute for being there. However, when we construct software, we don’t assemble a thing in a place, and often, we don’t resemble the future inhabitants.

With software, we rarely construct into a pre-existing situation. Physical construction builds into the world, with all its rules and regulations and the fashions and expectations of the users. There are comparable examples when you write applications, such as developing for a domain with a dominant competitor. In that case, you have to assess the existing users’ expectations and learn the metrics they will use to judge your new application. But notice that we cannot always use our own judgement to determine what would satisfy the users of our software. That’s why we have to resort to A/B testing. And the new occupants may have very different values from ours if we are unfamiliar with the application.

Builders and end users of physical construction share quality metrics because most of them are human. For software, what makes an application’s user comfortable is not biological but often idiomatic or habitual for that user; thus, it is much less likely to be in common with the developer, whose habits revolve around developing and testing their corner of the software.

Then there’s the construction site. If we compare the two variants, we see a difference. The final inhabitants of the physical product live with the same thing the builders handled during construction. But this is not so in software development. Not so much because end users don’t live within the running software, for that is untrue. They do. They have a mental model of what is going on, and they inhabit the space of the application. No, what is different is that the software engineers don’t work in that space. The construction site is unrelated to the application. The end users don’t inhabit the code. The software engineers don’t inhabit the application (except when testing it). With a physical building, both inhabit the same product.

A split like this lends itself to malpractice because the product won’t show signs of dirty work. The developer’s world can be as messy as they like as no apparent ugliness crosses the divide into the application’s presentation. If a builder were to have tangled spaghetti of wiring and pipes along the house’s ceiling, the new owner would be able to see it and would complain about the potential hazard. With software, you can’t see the dangerous wiring. That’s what I mean when I say there are two distinct environments—one world in which the product lives and the other in which the product is designed, constructed, and produced. So, software engineering is closer to the process of designing a production line that produces goods than it is to architecture1.

But even factories have standards, right? Well, they do now, yes. But these standards came about because people uncovered potential hazards in their products. Production lines used to cut corners for the same reasons code is messy now. There was no pressure to work cleaner or safer; reducing costs usually won out. Cotton mills and other factories of the industrial revolution were notoriously dangerous places, often producing dangerous goods and byproducts as well.

Now, in software engineering, at least in those places where we write software to meet strict quality requirements, we see standards and processes coming in to ensure the code factory is sufficiently capable of making the product well enough for human consumption. Code safety audits don’t inspect the code; they review the process by which we write it. Inspecting the code would be like checking every chicken for contamination rather than just checking whether the floors are clean, whether there is a rota, and whether the equipment for keeping everything hygienic is present and in good order.

The separation of environments means the environment of code is different from the environment of the product. This separation partly explains why the GoF book[GoF94] lacks UI or UX patterns. Why would there be? It is a book on patterns of the programmer’s environment, not that of the user.

But what does this mean for software design patterns? Architectural patterns are guidelines for solving repeating problems in the physical environment. Does this mean patterns apply to the software engineering environment alone, the user environment alone, or both, or neither?

The answer, I believe, is both. And occasionally, there is a crossover between the domains. Every problem is a problem of unresolved forces in a context. Every environment will recreate the conditions for certain problems over and over again. Every environment will have some naturally occurring wholesome solutions and some attractive anti-patterns. It doesn’t matter whether the environment is that of an application’s user or the code’s author. Every environment where the contents can change and introduce new interactions and usage patterns can exhibit fertile ground for the emergence of patterns.

However, the split environment complicates things. Resolving forces allows you to create harmony in one domain, but if your actions also affect another, you may introduce new unresolved tensions there without being aware of them—and potentially without any way to resolve them without undoing what you have just achieved in the first domain. It’s like a two-sided maze puzzle where every move must simultaneously be valid for both sides.

This domain split helps explain why Unix programs had a good user experience. You might think it a bizarre assertion, as command line interfaces aren’t renowned for their usability. But I use the word ‘had’ carefully. Historically, they had better UX because they better fit their users and, thus, their environment. The builders and the end users shared tools and ways of working, thus sharing many values on what made a good program. Programmers and scripters inhabited the same domain as the end users. They had common expectations and mental models of how things worked. The tools, programs, and work products all shared the same environment, which could be why the tools worked so well together. Unix wasn’t just a philosophy; it was also the practice of using the small tools you make in conjunction with others, together with the collaboration and sharing model that appeared around the same time.

1

This explains why many programmers like playing games like Factorio.

Programming as an environment

Most methodologies treat programmers as a fungible resource. However, junior developers are developers too. If working in the environment requires you to be senior to work safely and make real progress, then the environment is hostile.

Software inhabits multiple worlds; different entities inhabit each environment. Each class of entity has different values and senses of beauty. Sometimes, the inhabitants have a long history with the space and know what is good for them. For example, users know a good UI when they use one, as we’re used to using tools with our hands and eyes and getting feedback on our actions.

In other cases, the inhabitants have little experience of their world. Inexperienced programmers must work with source code for quite a while before concluding whether it is good. They need help identifying what is good for them. They must internalise the necessary values and judgements defining beauty for that environment.

In physical architecture, symmetry is an indicator of quality, as is geometry. Any architectural pattern language needs geometry because it’s about physical space, dynamic entities, and their activity. Symmetry and geometry make the unfamiliar easy to understand. Why? Because things with symmetry are predictable and take up less cognitive space than semi-chaotic things.

Code structuring is about creating software with others. It’s not about the geometry of spaces taken up by people in a physical way but about passing work between each other and relieving cognitive load.

When constructing buildings, symmetry can determine where things go. It’s how things indicate their purpose and what belongs beside them. Breaking symmetry points out exceptions. In code, instead of symmetry, we have something equivalent; we have variation and commonality. We have architectural guidelines that define where things belong and their default appearance.

So, breaking symmetry in construction is information for the end user, and variants are information in code. It’s how we decide whether to apply Don’t repeat yourself. Do we have specific information we want to portray? If not, don’t repeat!

Those who thrive

Your code base is an environment. As such, certain types will struggle, others will survive, yet others will thrive. Do those who thrive or survive create a better or worse environment for the kind of people you want on your team? The organisation is also an environment. The ‘org chart’ and the building that the team works from count towards the environment. How you reward the team and handle failure are as significant to the inhabitants as linting tools and source control. Your organisation could spend thousands on extravagant outings and team-building exercises or, instead, listen to its members and spend less by improving access to testing hardware.

Decide based on what you are making. How should you build the environment to best suit the people you need to make it happen? For example, is your project a conundrum? If so, you want puzzle solvers to thrive. But if it’s a user-facing project, you should cater to those who care about disambiguating and iterating over UX requirements.

However, regardless of the project, there are types you always wish to support and others you never will. Some people care about improving things and we should nurture them where we can. But beware those who want to show off how clever they are as that rarely translates into value for others.

Paper trail

In physical construction, it is normal for older buildings to show their age through nonconformity to modern standards. You can say the same of legacy code. You can tell by its lack of tests, its absence of modern language constructs, and its older or less consistent naming and style.

What does your code say about the people who worked on it? Are they clean coders, or do they write only the necessary code to achieve the goal? Are they quick workers, leaving functional but difficult-to-reason-about functions full of difficult-to-read short names? Just as we excavate ruins to discover more about the lives of those living in buildings long buried, we can learn about the people who wrote the code by the evidence of their values realised in the lines of source in a project.

If you’re unaware of the environment in the code and around the humans developing it, you’re missing an essential set of patterns for productivity.

Construction speed and design speed

The final product of the physical building process is a constructed artefact. We develop a structure guided by a plan and continue to use it in the same space and form once the project is complete. The software construction process is vastly different. In software, we routinely construct to verify even small design ideas.

In software, we compile the final product according to our current design, test it, and then go back to designing. We have an intensely rapid design, construction, inspection, and reaction cycle. Those in physical engineering must be positively envious of this because it enables us to construct things to a much tighter spec than they ever could or perhaps should.

If we worked like this in physical architecture, it might look like the following.

  1. We have a design for a bridge, so we want to test it.
  2. We launch a fleet of drones to build it, and they complete the work in under an hour.
  3. Then, we take a robot truck and drive it across the bridge.
  4. We watch and take notes as the bridge collapses into a fireball of destruction.
  5. We order the drones to sweep up all the damage and recycle all the debris.
  6. We analyse the results, tweak the design, and set the drones building again.

It’s such a fantastic situation to be in. And yet, once we have a design we’re happy with, there’s a nagging feeling it’s only guaranteed to work for the tests we ran. So, we know it passes tests for situations in the domain of ‘obvious’ or ‘required’, but we’re limited to what we think is possible. And that’s scary. Would I be happy driving across a bridge that had stayed standing through a hundred thousand tests with cars, trucks, and buses? I’m not sure. Did they verify against high winds? Was there a scorching day during certification, and did it ever snow? Nobody did the maths on whether the bridge should stay up under such circumstances.

It’s not that we don’t think about what could happen in software, but we are convinced we can’t do the maths for all cases. It’s impossible to think of all the situations software could get into. So, we do our best, write many tests, and hope that’s enough.

Or we go deeper. We can use the computer to debug itself. We can verify APIs with QuickCheck, and model checkers can find bugs in concurrent code we would never have considered. Fuzzers help us find ways to break our ingest algorithms and protect us from hackers by leaving fewer gaps for them to exploit. Using the computer’s unswerving diligence and boredom-proof brain gives us a chance to find these imperceptible hazards.

There is a big difference between development as a construction process and development as a design procedure. The final objects have different qualities. Design development guarantees that the product works for the intended use but offers few, or sometimes explicitly no, warranties about its suitability outside those strict requirements. How many licenses have you seen with a clause disclaiming liability for any problems caused by using the software? Probably more than you think, as few people read the end-user license agreement, but it’s hard-coded into many open-source licenses.

We know we can’t run exhaustive tests when developing as a construction process, so we build with tolerances and reject unverifiable designs. But because we’ve done the maths, the final form comes with a 20-year materials and labour guarantee. An actual warranty, where we accept liability for any damages and losses incurred during proper product usage.

Personally, I am not going to buy a house sold under an MIT license, no matter how many unit tests they ran on it.

Inspection does not improve the quality, nor guarantee quality. Inspection is too late. The quality, good or bad, is already in the product. As Harold F. Dodge said, “You can not inspect quality into a product.”

— W. Edwards Deming, Out of the Crisis[Crisis82], p. 29.

Iteration speed is a valuable tool in getting to a good design, but we have to verify we’re using good metrics; otherwise, we can fall into a pattern of purely positive testing. We must not just test that the code does what it should when used correctly but also verify it does not do what it shouldn’t when misused. Here, the ability to construct quickly pays off massively. We can confirm that our tests work with mutation testing: build something subtly different and prove our tests fail, demonstrating that the tests add value.
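
A small sketch of testing misuse as well as correct use, with an invented ParseAge function and plain asserts rather than any particular framework: a mutation that removed, say, the range check or the trailing-junk check should make at least one of the negative tests fail, which is what tells us the tests are earning their keep.

    #include <cassert>
    #include <optional>
    #include <string>

    // Parse an age in years; reject anything that is not a plausible human age.
    std::optional<int> ParseAge(const std::string& text) {
        try {
            size_t used = 0;
            int value = std::stoi(text, &used);
            if (used != text.size()) return std::nullopt;   // trailing junk
            if (value < 0 || value > 130) return std::nullopt;
            return value;
        } catch (...) {
            return std::nullopt;
        }
    }

    int main() {
        // Positive tests: it does what it should when used correctly.
        assert(ParseAge("42") == 42);
        assert(ParseAge("0") == 0);

        // Negative tests: it does not do what it shouldn't when misused.
        assert(!ParseAge("-1").has_value());
        assert(!ParseAge("1000").has_value());
        assert(!ParseAge("42abc").has_value());
        assert(!ParseAge("").has_value());
    }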

The difference in speed means that our process of adaptation is different. Instead of making things that are wholesome in their space and adapt easily, we make things that are good instances and make easy-to-tweak designs for their production. We value the ability to accommodate new features, capabilities, and tolerances, or to heal vulnerabilities. So, instead of working with looser materials as Christopher Alexander did, we work with hard, even brittle materials like the strict mathematics of cryptography, comfortable in the knowledge that they can be changed and restructured on a whim.

In software, the tricky part to modify isn’t the product but the design structure. The materials might be fungible, but the value in source code is not just in what the configuration produces but also in the familiarity of the arrangement to all individuals involved in its development.

Home

Home is not a house, dorm, or bed but a feeling of safety—the space to dream[TPoS58]. Home is our comfort and the beginning of each thought. When we are pushed to criticise home, we do not like it. People have many homes in their lives; a workplace or a park can be a home for the role they play in the world. Each aspect of us knows a home. Even if where we live is not perfect, we find one nook that we can trust and fit our most intimate selves there and relax. Programmers often feel this way about code ownership. I wrote it, I maintain it; therefore, it is a familiar home to me.

Sweeping changes are stressful because they make the codebase unfamiliar to many invested inhabitants. I equate this with a messy desk or workshop. When someone helpfully tidies it up, you can no longer find anything. When my parents took to cleaning my bedroom in my short absence as a young adult living away from home, I felt invaded. It was no longer my space. I did not feel I had permission to make further changes. When a code module I maintained for a year was suddenly subject to a series of pull requests making substantial changes and aligning it with a different module, I felt I had to either push back or take it on the chin and recognise the code was never my own. I had built my home on a right-of-way. It belonged to the project, and my familiarity with it was for me alone.

This is and always will be the strange habitat of code. The coder lives in the codebase for so long, and then suddenly, the product, the construction, is accepted. The codebase is vacated. You finally build the best possible environment to work in to solve the problem now it is understood—and with its success, the world is void of purpose.

It not only happens at the scale of the project but also a room at a time as components are developed and finished. All this is true unless you reuse, extend, and maintain the product. But even in iterative product development, there will always be a time when there are people leaving the project. For each of them, they are leaving home—a familiar dwelling and its associated safety.

Inhabitants and motion

In physical construction, the inhabitants and events that play out in buildings are the only factors we can use to create metrics for the value of the building. We can only measure the value of a construction by those for whom the building is intended, whether as their place of work, rest, or play. How those events play out, whether they are effectively performed or fit neatly into the space, can be used to describe the suitability of the building.

In code, the end user and their activities are the critical measure. What the user gets out of their interactions is the only metric by which software should be measured. But this is not always the case. Software is purposeful, and whether it can be found and used to complete a task effectively determines whether it is fit for purpose. The inhabitants are the users, and the events that play out are the users’ transactions, actions, and interactions.

How do these values differ? Well, referring to the insight discussed earlier, there’s a big difference with maintenance. The built object needs to be maintainable for physical construction, whereas, for software, the product need never be repaired. Instead, it is the code that constructs the product that needs to be revised. In some situations, even the data can be thrown away, but usually, it’s just a minor migration task, even when wholesale replacing an application. This is quite different from physical building maintenance. We can’t just tear down a whole house and replace it. People keep their stuff in there.

But this is not always the case. With regard to physical buildings, one counterexample is a hotel room. If there is something wrong with your room, the hotel can offer to move you. This is effectively the same as replacing the building. On the IT side, there are counterexamples where software must be maintained rather than replaced. Servers used to be run ‘as pets, not cattle’, so uptime was extremely valuable, and the ability to make changes to the running system was considered a mandatory requirement.

Physical building maintenance is akin to changing the codebase while your application runs. While possible in most programming languages, this activity is unusual in the present climate. We generally restart services or shut down applications while installing new versions. The feature is built into Erlang; you can roll your own in Python, but C and C++ are more complicated. Stefan Reinalter’s Live++ is an excellent example of what can be achieved. But these are all exceptions to the rule. However, not all systems can keep the product in place; some need to replace the whole running executable. Those that mutate their behaviour to achieve change and leave the source untouched are more like physical buildings. Because we don’t reconstruct with every design change, the design should include protection from things that are not yet present.

The natural software maintenance pattern is to tear it down and build it back up. Unfortunately, this transfers into how we consult with the client. Developers second-guess software users and make changes without informing them. With physical development, that’s relatively uncommon. For one thing, maintenance and change cost time and money, and the maintenance will intrude on the physical location, so the client is aware by default.

The other difference is that with physical construction, even if the design is uniform across a selection of buildings, there’s still a chance to customise specific structures. When the land is a bit smaller or lower, or a road bends a little towards the end, we can make adjustments and local adaptations can play out.

When a design is improved, existing buildings are not automatically updated. The same is clearly not true of software, where rollouts and updates are now the norm. I cut my teeth on writing software for games consoles. When I began my career, consoles didn’t have network connections, updates, or patches. Those games were almost the same as physical buildings. If there was a bug in the game, it would be in all copies and would never be fixed. Today, almost every game over a certain size has what is known as a day-one patch. The process of certifying a game as suitable and as bug-free as possible is slow enough that a development team can work those extra weeks to produce a new version with significantly fewer bugs. It’s a strange world we live in. Hardly anyone but the testers and reviewers plays the first version that passes certification.

The software user has a discontinuous interaction with the application. One day, the application will be different, and there is little opportunity to go back. Changes are thrust upon the user whether they like them or not. The money and time they invested in the application are not seen as a reason for the developer to consider the user’s point of view. Instead, the goal is always new users.

As stated earlier, we don’t always use the value of the software to our users as the metric of success. New users are new sales. Sales are money, and money drives the actions of most organisations. Sometimes, there are romantics who believe there is some greater purpose to their work. But even if you are trying to make a delightful and practical tool for your client, you’re only being allowed to by the organisation because it senses your actions are profitable overall. Harsh, I know, but unless you’re in charge of the organisation, such as with some open-source initiatives, then ultimately, it’s probably money pulling the strings.

Beauty and Quality

Patterns can be lenses. They can guide and amplify the process of using your senses for new solutions. They can help you better determine whether the foundations you are forming are promising or poisonous. This sense of the quality of our solutions is intrinsic to us; it detects what Christopher Alexander called the ‘quality without a name’.

Throughout Christopher Alexander’s research, there’s a thread of something that, at first glance, seems to break with science and rational thought. The architect put up many buildings, so when you first learn about his ideas of using feelings and instinct to make highly consequential decisions, you are left confused. On the one hand, here is someone who devoted their life to constructing buildings that withstand the strictest regulations and environmental constraints. And yet, his core thesis relies on human instinct and emotional responses.

We have been taught that there is no objective difference between good buildings and bad, good towns and bad.

The fact is that the difference between a good building and a bad building, between a good town and a bad town, is an objective matter. It is the difference between health and sickness, wholeness and dividedness, self-maintenance and self-destruction. In a world which is healthy, whole, alive, and self-maintaining, people themselves can be alive and self-creating. In a world which is unwhole and self-destroying, people cannot be alive: they will inevitably themselves be self-destroying, and miserable.

But it is easy to understand why people believe so firmly that there is no single, solid basis for the difference between good building and bad.

It happens because the single central quality which makes the difference cannot be named.

— Christopher Alexander, The Timeless Way of Building[TTWoB79], p. 25.

In The Timeless Way of Building, he uses the term ‘quality without a name’ to describe what he sees in the best buildings. In other books, he describes the quality as that found in the smiling of a face, not the face or the smile itself, but the moment of reacting with one. It’s never the building but how it sits in its environment. It’s hardly ever a thing in any sense, but it’s always about configuration in reaction to a context. The quality without a name identifies attributes of relationships more than of the elements alone. But describing it is difficult. Christopher Alexander takes many pages to explain it.

It is never twice the same, because it always takes its shape from the particular place in which it occurs. […] It is a subtle kind of freedom from inner contradictions.

The word which we most often use to talk about the quality without a name is the word “alive.” […] But the very beauty of the word “alive” is just its weakness.

Another word we often use to talk about the quality without a name is “whole.” […] But the word “whole” is too enclosed.

— Christopher Alexander, The Timeless Way of Building[TTWoB79], pp. 26, 29–31.

He lists other words—comfortable, free, exact, egoless, and eternal—but in the end, admits that no single name captures it. The quality is well-described by these words, but it took me multiple readings to grasp the concept fully.

More concerning is how this sense of quality is easily disturbed. We overrule it with our logic, preferences, history, or societal norms. It’s easier to ignore the sense than use it, which is why it took Christopher Alexander such a long time[NoO1-01] to find the right way to express the questions he used to provoke responses employing it. The question, then, is, why did he feel the need to provoke it in the first place?

The quality without a name is that which you detect when the environment is in harmony. The detection is personal, but it is universal in that every individual will be just a little more comfortable and less stressed than if the surroundings deviated from that equilibrium. Even an environment extremely alien to us, not fabricated by dissimilar humans with different tastes but simply outside our sense of the normal and regular, can sometimes feel natural and safe. That almost animal-like feeling of goodness in the moment is our instinct telling us the world is okay: stay like this. We find it in places, objects, people, and communities. Everyone has this sense, and it is, for the vast majority, aligned across humanity.

Christopher Alexander ran an experiment that often goes by the name ‘the paper strip experiment’1. In the experiment, he presented a set of paper strips of seven segments, where each segment was painted either black or white. Every possible arrangement of three black segments among the seven was present in the 35 strips.

Over a series of tests designed to determine the perceptual complexity of each of the strips, Alexander found evidence of an overall ordering. The test results were highly correlated even though each test was quite different in design from all the rest. Without planning for it, he had made a series of tests that were the least inviting to the ego and most aligned with intrinsic feeling.

The tests revealed an empirical order to complexity, but they also revealed something else. After working with the results for some time2, Christopher Alexander deduced that the wholesomeness and simplicity of these very abstract artefacts came down to a property he had seen in many wholesome architecture projects: sub-symmetries, later formalised as local symmetries, which I go into in more detail in the next chapter.

The quality without a name has many aspects. This was likely the first one recognised and rationalised by Alexander and his team. It’s a property of most wholesome objects. It was a great discovery, one of many to come. The aspects were developed into the properties, transformations, and sequences of unfoldings described in Christopher Alexander’s largest work, The Nature of Order[NoO1-01]. However, for now, we need to concentrate on two questions about the quality without a name. Surprisingly, they have the same answer. Why is this subtle sense so fragile? And why do we all have the same sense of it? To answer this, we must take a path through the world of beauty.

1

The earliest references I found for the experiment are the 1964 paper ‘On changing the way people see’ by Christopher Alexander and A. W. F. Huggins in Perceptual and Motor Skills, Volume 19, and the 1968 paper ‘Subsymmetries’ by Christopher Alexander and Susan Carey in Perception & Psychophysics, Vol. 4 (2).

2

Two years according to [Grabow83] p. 197–198. Three to four according to [NoO1-01] p. 190. The gap of four years coincides with the gap between the papers.

Beauty as objectification of quality

Beautiful code is good. But how do we get from the design patterns of nominalisation or separation of responsibility to the idea that beautiful code is good?

Beautiful code is beautiful only when we consider the viewpoint of genetic evaluation. Its ability to reproduce makes it beautiful. Its ability to survive makes it good. Whether it’s fit for purpose is part of it. Whether it’s an environment in which other things can happily exist also contributes. Companies with bad code tend to go out of business, but it’s sometimes circularly defined. The code is defined as bad because it’s non-surviving; the business didn’t survive. Companies with good code can go out of business too, but that’s a weakest link issue, where an accident from a different domain cuts the life of an otherwise healthy organisation short.

It’s a very circular definition, but it’s just the same as how we evaluate and define good or high-quality for biological systems. We measure a genome’s effectiveness by how successful the organism it generates is at reproducing. If the organism dies, its genome has failed. If you assume poor genes lead to a weaker and less successful organism, you have already accepted the circular definition.

Good code will be relative to its environment, and goodness will be a metric of surviving in the face of competition. And yes, if you are shielded from some forms of competition or challenge, your code will likely be deficient in those domains. Without competition, removing poor performers is nearly impossible, as performance won’t be measured meaningfully. If there is no performance baseline, there can be no performance assessment.

But code is unlike DNA. DNA does not have rational authors. The genetic process introduces changes that are simple and random. DNA leverages the power of small, safe changes by combining existing proven DNA. The reason DNA recombination in sexual reproduction is so powerful is that it selects from already proven actions and sequences. Without combination, mutation only allows simple random occurrences to introduce novel sequences, and a mutation is much more likely to represent a regression than an improvement. Combinations are more likely to be variations of existing qualities. Think of the difference in likelihood between forming a new valid sentence from randomly selected words and splicing together parts of existing sentences. The chance of obtaining syntactically correct outcomes is still low, but it’s much higher than with single-word changes, and the result is probably more entertaining.

But how does this apply to code? Well, programming tends to consist of providing alternatives to existing code. Our options come from existing code, memory, or creating on demand. But even new code from our imagination is a combination or mutation of something we have experienced. We make changes that are unlikely to be entirely regressive, and if one is, the compiler or tests should catch the problem before it becomes too detrimental.

Programmers can examine existing code and determine what might improve it. Think briefly about how science fiction authors write about genetically engineered creatures; they always seem to be improvements. We modify crops in much the same way we make changes in software projects. Introducing plum pox virus genes into plums to increase their resistance is equivalent to importing libraries into our projects.

What does this mean for beautiful code? Well, the ability to tell what the code does and how we can combine it to create improvements is part of what makes it a better survivor, so that aspect must go towards our definition of beautiful. If we can make effective changes safely, we have an advantage over the competition. This aspect, too, must be part of the definition of beautiful code. Whatever makes the source better able to survive and create more of the same source will define what beauty is.

We can’t personally see the beauty, whichever way we look at this. We do not find the most beautiful code attractive. Its properties can only be recognised rationally. We are not struck with awe at beautiful code. And neither are we aroused by a particularly elegant walrus.

But that makes sense because beauty is something in our genes first, our culture second, and only finally in our individual selves. Our sense of beauty starts with our rawest instinct about a thing. We gaze in awe at cliffs and large buildings or from the tops of mountains or the windows on a plane. These senses of beauty (or fear or wonder) are human. Not Western, Eastern, feminine or masculine. All humans see something positive in a warm glow from the window of an abode on a cold night. But this sense is fragile. We logically override the good feeling if our rational self notices a chain-link fence between us and the glow. If we hear alien language from within the building, our suspicion is raised. This is our self overriding the genetic response. Christopher Alexander wanted to avoid this with his paper strip experiment. His abstract test allowed for the most concrete yet honest response.

Why we think things are beautiful can be societal and personal, but whichever layer you choose, beauty is the objectification of quality in that domain. A beautiful dance is a genetic beauty of grace and motion, a cultural beauty of consistency with a pattern, and a personal beauty if you prefer a particular type of dance. What’s useful for us is to see that the quality without a name is closer to pure genetic beauty, even in chairs and paper strips.

But then, we find ourselves in a desperate position. We haven’t been evolving for the last hundred thousand years alongside code. We don’t have any genetic sense of beauty in that domain. So, we cannot see beautiful code at that lowest level. We can only see its beauty in the cultural (coding standards) or personal (familiarity) sense. We may objectively state that some code matches our cultural norms and that we like it, but we can’t assess whether it’s beneficial. Just looking won’t tell us if it’s good. We must act like genetic processes of that domain, making changes and building unquestioningly before seeing it all tumble down to be superseded by a superior piece of code. We can only verify whether code achieved our goals. We can only loosely estimate how likely the code will survive.

This is why software architects have a Herculean job to do. They have to provide a path to follow while blindfolded. Most will choose safety: they will adjust an existing paradigm to fit the purpose, much like how DNA recombination works. Sometimes, the different pieces work out great. Other times, they complicate and cause problems. But then, that’s information to never do that again, effectively neutering the code so it can’t breed more offspring.

Beauty to the inhuman

A dry stone wall is a formation caused and perpetuated by humans. Humans needed a way to set a boundary, to mark a place and limit the movement of livestock. A dry stone wall captures our imagination because it’s built for a clear purpose and helps us survive.

  • Modern architecture is a formation caused and perpetuated by capital ownership, working for hire, and the career maintenance of visionary architects.
  • Cell walls are formed and perpetuated through upkeep by protein reactions.
  • Humans care, but we are as cattle or proteins to the driving forces of modern architecture.

Christopher Alexander claimed modern architecture led to dead buildings[NoO1-01]. He called it less alive than the timeless way of building. He saw buildings immaturely constructed, but I see buildings built by something at a different scale than human individuals. I see the systems of society and the more extensive system of methods and commissioners of construction. I don’t think it’s dead; I think it’s alive and hungry.

Robert M. Pirsig went to great lengths to uncover a better definition of subjectivity and objectivity, first in Zen and the Art of Motorcycle Maintenance: An Inquiry into Values[ZatAoMM74] and then in Lila: An Inquiry into Morals[Lila91]. But I think even he missed something important about subjectivity because he failed to see people as extensions of the gene.

Subjectivity, in the general sense that ‘some things are simply subjective’, does not exist. The lens through which we view a relationship, or the context, provides the key to what is subjective. When melting ice, there is an objective amount of salt to use. With food, there’s a subjective amount of salt preferable to humans. For slugs, it’s objective. So, even down to simple things like this, we understand there is the subjectivity of the individual, the species, and the context.

So, when we talk about subjectivity, we must consider the subjectivity of what?

Where Christopher Alexander claims modern buildings are devoid of life, I counter with context. All buildings are alive but not necessarily alive in the context of humans and wholesome human interaction. Christopher Alexander seemed to believe that the artificial existed. But all artificial things were created by naturally occurring phenomena. Humans are natural, so petrol stations must be natural too. We only think of termite mounds as natural because we think of termites as natural. A higher being beyond our comprehension would consider our 20-foot steel shipping containers natural. They may think of them as odd and not immediately understandable phenomena created by the creatures known as humans, similar in some ways to the hexagonal grid of the beehive.

We must acknowledge that modern architecture is natural. Just because we fail to understand the goals and why it survives doesn’t make it dead. We need to understand what its environment is and why it is fit for it. What makes it better than traditional architecture in this environment? When we apply systems theory, we can ask: What do we need to change about the environment to encourage the construction of better buildings? What would it take for more wholesome structures to thrive?

Imposing solutions

Modular designs are everywhere but only fit when the modules have no character and where the environment is free of life’s intrusions. What environment does modular design exist in? It has a survival trait of being easy to reason about when thinking about numbers, not people. It does better when fewer people are in charge of spending and allocating resources to do the construction work. So, one environment is where conscious decision-making time is limited. When you need to make sweeping decisions, modular design is easy to see as complete. And completeness is a virtue when you are not directly in charge of the budget.

Now consider coding rules. Many rules we follow make sense 90% of the time. However, there’s almost always a reason to ignore them at some point. Otherwise, we would change the language outright. Instead, we must use our judgement. The benefit of modularity in buildings comes when you would otherwise need to devote time to applying your good judgement to the design of hundreds of plots. You can use modules and leave the building to those with weaker judgement because you can trust they can’t do too much damage. Coding rules are there for when you lack the capacity to think about the virtues of many sections of code. You select a set of rules that embody the majority of your principles and lock the developers into producing code according to those rules. Now, you can at least be reasonably sure they won’t deliver a load of garbage if they follow them.

Pervasive modular design is quick to build, dead, and dull. However, it’s only dead to the inhabitants of the buildings. To the environment of countrywide housing planning groups, it’s a natural fit, relaxing two opposing forces of costs and time while resolving a homelessness crisis. It’s a beautiful pattern to the system of government spending. Coding standards align developers and bring about a common baseline of quality similarly. It’s a way of saving energy and lowering the number of decisions people need to make in a decision-making-heavy task.

Cost of uncertainty

In conclusion, modular elements are a theoretically good solution to the problem in the abstract domain of budgets and spending. It isn’t easy to obtain numbers that show the costs and benefits of non-modular solutions, so we promptly discard them as certainty is more comforting. Standard practice states you need an architect to design a house. If we tailor each build to the environment, then clearly, the price will spiral out of control. Again, when you think in theoretical terms, it’s obvious you can’t have bespoke housing for a million individuals. And yet, that’s precisely how the world worked for a very long time.

What about coding rules? In almost every programmer’s career, they will likely remember a time when they didn’t have them. Some will think back on that time fondly because of the freedom. Others will think about the suffering and the pointless bugs caused by not taking obvious steps to increase clarity. But, 20 years on in my career, I still decide on a per-project basis whether I will follow any rules or try another approach. Sometimes, I find no real benefit to a rule; I gain by avoiding it. By not following rules, I often also learn why we have them. By experiencing and gaining wisdom. Oh, painful, painful wisdom.

So, who benefits from the coding rules? Whose costs are reduced and whose needs are met by them? Usually the coders, in the sense that they get work done faster, rework less, and can work with others with less friction. But the organisation benefits too. Coders still largely get to decide how they arrive at a solution, which means they aren’t dictated to by the system above them in the way that modular housing design serves the government rather than the residents. The most durable rules are those that make coding less wasteful. In that sense, beauty is to be found in eliminating decisions and the time consumed in verifying them.

However, we built a million houses before modularity gained traction. They were constructed with attention as a million individuals made decisions. People had the time and invested it. This is a connection between coding rules and modular housing. Coding rules are there to limit the number of decisions we are required to make before we finish constructing. There are many more undecided points in a software project than in a physical construction project because the effort in a software project is 99% design and almost none in the construction. So, for me, flexible, living coding principles, idioms, and quality guides with examples of how and when you can ignore or adapt them are some of the most beautiful elements of modern software development.

Beauty in the product, not the process

Even though the code in DNA isn’t beautiful, its product is. The beauty of the product lies in the eye of the beholder. The beholder bases their metric for beauty on how much the product’s existence helps it succeed—the particular contribution is irrelevant, whether food, mate, or secure abode.

The quality of DNA is only visible once the organism executes its instructions. Only the outcome of growing as an unfolded sequence of decisions and actions on materials can reveal the true beauty. The beholder does not measure the product directly, though. Implanted into organisms are heuristics for whether something is relevant to their success. These heuristics are the senses of beauty.

We may find the helix elegant to view and the processes of the cell captivating, but these are not beautiful in the sense we are looking for right now. I will explain this, and cakes, later.

DNA expands into an organism based on the organisation of the cell that initiates the process. The way a cell divides and changes behaviour based on proximity to other cells is a complex dance of interaction and contextual behaviour. Christopher Alexander called this step-by-step but analogue motion of change unfolding[NoO1-01]. Even when they aren’t biological, such as with buildings, the unfolding process can be applied by someone interested in starting small and designing outwards from a point. They can adapt, split, extend, and differentiate as the space reveals what they should do next.

The process of evolution affects the choices of the cells doing the unfolding, not often the unfolding itself. DNA code changes, merges, splits, and splices. How the cell replays that code into protein production is hardly ever impacted. Unfolding as a process—cells splitting and responding to their context—must also have evolved. But it’s fundamental. So, it behaves as part of the larger environment upon which evolution plays out. The unfolding process may seem natural and beautiful to us, but it’s uniform across the domain of almost all living things, so it cannot by itself confer advantage. When everyone is literate, literacy is not an advantage. However, the reason why we humans appear to have the upper hand among the animals is down to this very point. We have the power to elevate our unfoldings into new domains.

Beauty is a value judgement by survival machines. Applications need to be beautiful in this domain, so you often see people talk about an application being good or bad, and they can often point out very explicit aspects that prove their point. However, sometimes they drift away without knowing what they dislike about it. That can be the effect of a more profound sense of beauty. Our users will love our programs or hate them. They will prefer them to our competitors’ programs or not. But, as we learned earlier, code must be beautiful in a different domain. We get trapped in destructive code by trying to apply natural beauty value judgements. Remember, we can’t evaluate code by looking at it, as we work with different laws of physics.

Beautiful code won’t look pretty, but it will rapidly make beautiful products. It will create products faster and with a higher survival rate than the competition. That’s the basis. Not the process or how we store our decisions and progress but the ability to safely recombine, adjust or improve, or eradicate waste. Beautiful code is all about maintaining the development environment. No matter how aesthetically pleasing your code is, if the product is low-performing or not novel, capable, or quick to market, then the code is not beautiful.

You might think this means we can declare dirty code, spaghetti, balls of mud, and all sorts of wacky-looking codebases beautiful. And, in a way, that’s correct. However, a very low correlation exists between a coding environment producing good products and it being laborious to work in. Some rough-looking codebases are still around, delivering a valuable service in their niche, but how often do they change? How often do they need to change? Do they have competition? They are not apex predators; they are the earthworms of code.

Environmental fitness

I have a theory tying all this together. I don’t believe there is some luminous ground (Christopher Alexander’s term for the inherent beauty in forms), and I don’t believe determining the quality of things is a uniquely human pastime. I firmly believe quality is relative, but relative to something we cannot help but take for granted.

I am a staunch believer in the power inherent in the evolutionary process. Emergent complexity is natural and explains many more things than we even recognise as needing explanation. I see the impact of selection on television programmes, CPU design, the rise of social media, and the origin of art. It’s not surprising to me that it has an impact here, too. So much is affected by the selection process that to remain uninfluenced would be unusual. The revelation is not that it is affected by selection but how deeply it relates.

Evolved species are survival machines. To survive, they must do better than others or at least not do worse than the majority. This selection weeds out those less capable of surviving by letting them die off. Remember, it’s not survival of the fittest but eradication of the unfit, so the system maintains variety. When you have a large variety, you have to build senses to detect the best for you in nuanced ways.

To explain my theory, I need to build a picture of what happens when you have a universe of selection and variety. I wish to introduce a thought experiment involving a species that lives on berries.

At first, the species could not distinguish a good berry from a bad one until long after they had eaten. They only knew the berry’s goodness by how much energy they had the next day. The species evolved when some developed the capacity to discern berry quality by taste. The tasters quickly outnumbered their tasteless ancestors. They evolved again when some began to discern by sight. After that, those who chose by sight were quickly outnumbered by those who could remember where they last saw some good berries.

The final evolutionary step happened when some of the species had inklings about where the best bushes might be. The feeling wasn’t magic; it was a genetic tendency to look in lightly wooded areas near water. It might have been a general preference for the look of trees, the sound of water, or the feel of the air. But when hunting for new berries, those with inklings were more likely to find berries than those relying on memory and hunting without knowing where to concentrate their efforts.

This is my theory of quality. Quality is an indicator of future value. Quality is an evolved inkling, a sense that picks up environmental indicators without committing to recognising them directly. We like seeing mountains and valleys, as they could provide protection. We fear the roar of a lion. We turn up our noses at the smell of rotten meat and salivate over the aroma of baked bread. These are all reactions to a potential future. None of these are reactions to the things affecting us directly.

I want to be very clear on this point. A lion is not a threat. A lion that is hungry and aware of us and how much meat we are made of is a threat because it intends to eat, and then the future is a lion ending our lives to satisfy its needs.

Putting our hand on a hot stove is not an immediate threat. Rationally speaking, our hand burning is not affecting our genome; it only concerns what our genome produces: us. But we react to the heat because genes are instructions for assembling a being that can feel pain and respond to situations where its continued existence is threatened.

The sequence of instructions in DNA drives us to ensure the safety of the instructions in the DNA. It describes a process to create an entity that can tell what is favourable for it while conserving energy. That is quality. It is our reaction to things, driven by our configuration, which is itself driven by selection, which is in turn driven by the environment.

If selection defines quality, then quality is about survival. If being able to detect what is higher quality leads to survival, then why did Christopher Alexander’s paper strip experiment lead to a strong bias towards some patterns of black and white colours looking more coherent than the others? If patterns of black and white squares on strips never turned up as a fitness test in nature, then why do I think this bias is related to evolution and our genes?

I believe the strips are a mimic, like the many non-poisonous insects wearing false black and yellow stripes. The pattern recognition built into other creatures defends these less aggressive insects. Mimics don’t have to be about stripes or even animals. Mimicry is about producing the same phenomena as another source to reproduce beneficial behaviour in the beholder. Stick insects mimic leaves, and cakes mimic fruit. See, I told you I would explain cakes.

The paper-strip pattern harmony is an accident of overlapping domains. The strips of black and white patterns were an abstraction. The arrangements don’t signify anything, but our instincts still react to them and induce a feeling. Our inkling systems respond even when what we sense is not natural. We prefer things with symmetry because, in nature, asymmetry typically indicates damage or inferiority. We can trick our senses into having feelings about things that aren’t good for us, such as sweet, salty, or oily foods.

Before, you may have complained about my statement that DNA is not beautiful or that the process of molecular recombination is not pretty. I hope I have explained why it is not an honest beauty. It exploits our senses, tuned to a different purpose.

Environmental fitness affects more than just the real world. It affects any domain where there are variations and death in any form. Some people refer to this in terms of the market deciding the price. It’s what others will be talking about when it comes to creating artwork and having to kill your darlings. If something isn’t adequate, it will either be culled, or the failure to cull it will diminish your fitness.

When we talk about environments, one particular environment comes to mind for software engineering: the codebase. Who lives in the codebase? Does the computer or the compiler live there? To know whether something lives in an environment, you must first determine whether it can die. In the case of the compiler, it’s unlikely it’s inside the code environment. It might be within the larger setting of the organisation if there were a willingness to change the programming language or vendor. But realistically, it’s more environment than inhabitant.

We can delete the source code. So, it undoubtedly lives in the environment. But as in a natural setting, code is also the environment for other code; it can make the environment inhospitable to other code. Do people inhabit this environment? Some do. Their reputation and ability to get work done rely on its structure, so their career and ability to effect change in the environment are themselves impacted by the environment.

How this helps

There’s a lot going on in this process. We need to step back a bit and ask, how does this help us understand design patterns? First, we can recognise patterns as something akin to features of an evolved species. They are documented inklings. Second, we can use fitness to define whether patterns are high quality. Third, we know we can look for patterns in any environment, which means we need to recognise which environments we’re operating in.

We can assess whether a code configuration is healthy based on its fitness with the surrounding code, business practices, and tools. We know some developers will perceive quality differently due to their wisdom about a particular configuration’s impact. This wisdom indicates how well the developer understands the environment and its complexities.

But more usefully, we can now question the quality of the outer environment. Consider outer environments as compilers and tools, the operating system, company policies, or the hiring process. Any environment influences how we measure quality, whether in the code or the developers writing and maintaining it. The fitness of an environment affects its output and the evolution rate for the entities within. So, we can judge it. Anything we can judge can be removed, so we can find patterns here as well.

An environment competes against other environments when entities can migrate between them. You can’t quickly get a drink in the desert, but you can migrate to a wetter climate. You can’t always follow best practices at one organisation, but you can find a new job. You can’t write expressive code in your current language, so instead, you can write a DSL. If you’re aware of other environments, those environments now exist in a meta-environment and begin to jostle for fitness because an environment without entities is a dead one.

Positive only

Rather than inspect all patterns, including anti-patterns, let’s concern ourselves with how to find and use the best positive patterns to write better code. Finding them will be difficult because we do not have the right instincts to tell good code when we see it. Great or perfect code can look like a mess to our eyes. We cannot gauge it because it belongs to a world with different rules of survival.

None of our existing instincts apply, as those were developed in response to the rules of physics in our three-dimensional world plus time. The world of computer software has a very different number of dimensions and a very different set of laws of physics. Therefore, when I ask you what type of feature is attractive in core code, you must rely on cultural or rational evaluations.

We want an innate inkling system for good code and bad code, but we don’t have one. We almost have the opposite. Our inkling systems were developed to appraise the physical world, not that of software design. Our instincts prefer symmetry and repetition, which do not relate to quality in code.

Programs are not buildings, art, or Turkish rugs, but something unfamiliar. Programs are more like plans, laws, or recipes. Programming languages reflect our expectations or judgements. We need to find the properties supporting our goals.

The source of beauty

Organisms evolve a sense of beauty as a mutual adaptation present in the expression of the wielder and the appraisal of the beholder to promote survival. The earliest bees had limited concern for flowers, and the flowers didn’t prioritise bees either. Over time, their appraisal and expressions co-evolved into an unbreakable bond.

Our reaction to excellent code is weak. We need to accelerate the process of recognising its beauty. What will code look like in a million years if we consistently select code that keeps us safe and productive?

Most patterns will make sure any problematic code is removed or repaired. They will suggest code forms that are easy to mend or never need maintenance. The patterns will reinforce exponential growth and force multipliers, making further development easier or helping developers decide with confidence. They can be better processes, revealing the unknowns earlier in a series of development steps.

In this way, the Agile Manifesto[AM01] looks like a hypothesis for, or a description of, software development’s laws of physics. The manifesto is not a process but presents the metrics for selecting good processes.

Beautiful code

Code is more like prose, so it is less well-defined concerning survival. We react to bad poetry, but code is more like non-fiction. Its value comes from unambiguously revealing the answer to questions. Code needs good form, but we need to evaluate it in an alien way. One way to think about code is through our closest analogy in biology.

Business logic or gameplay code follows the same formal rules of beauty as DNA, as it constructs the final form. It must be simple to safely recombine, splice, and experiment with. Changes ought to affect the final form in well-defined ways—they must be safe and cheap. Experiments must be non-destructive and isolable. Our inkling should align with enabling confident, quick changes.
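
As a minimal sketch of what ‘safe to recombine’ can look like, assume a hypothetical pricing module in which each business rule is a small pure function. The names and rules here are invented for illustration; the point is only that rules can be added, removed, or reordered without touching one another, so experiments stay cheap and isolable.

    from typing import Callable, Iterable

    # Each rule takes a price and returns a new price: no shared state, no side effects.
    PricingRule = Callable[[float], float]

    def loyalty_discount(price: float) -> float:
        return price * 0.95

    def winter_sale(price: float) -> float:
        return price - 2.0 if price > 10.0 else price

    def apply_rules(price: float, rules: Iterable[PricingRule]) -> float:
        for rule in rules:
            price = rule(price)
        return price

    # Swapping, reordering, or dropping a rule changes only this list.
    print(apply_rules(20.0, [loyalty_discount, winter_sale]))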

The metric of quality is different for core code, framework, platform, or engine code. We may wish to follow the same beauty rules as the cell construction and replication engine driving the unfolding process. These foundational components must have robust, error-free mechanisms and be extraordinarily reliable. Experiments in business logic can only happen on top of solid foundations. We should define goodness by the certainty of behaviour. Unit tests will rule, preventing damaging changes from escaping into the world. This kind of code must have proof it works as intended, meaning all code of this type must document its intention.
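
A minimal sketch of what that documentation of intention can look like in Python, using the standard unittest module and an invented clamp helper: the tests state the intended behaviour explicitly, so a damaging change to the foundation cannot slip out unnoticed.

    import unittest

    def clamp(value: float, low: float, high: float) -> float:
        """Keep value within [low, high]; callers rely on this never raising."""
        return max(low, min(high, value))

    class ClampIntention(unittest.TestCase):
        def test_values_inside_the_range_are_untouched(self):
            self.assertEqual(clamp(5.0, 0.0, 10.0), 5.0)

        def test_values_outside_the_range_are_pulled_to_the_nearest_bound(self):
            self.assertEqual(clamp(-1.0, 0.0, 10.0), 0.0)
            self.assertEqual(clamp(42.0, 0.0, 10.0), 10.0)

    if __name__ == '__main__':
        unittest.main()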

Beautiful processes

DNA is an environment in an environment. It is a peak fitness form for life on this planet. But that only means DNA is a good environment and was better than the rest when it first appeared. We may never know what came before DNA, but with code, we can see multiple environments, languages, and development practices. One may emerge and dominate the way DNA dominates, but it’s hard even to name a leader today.

We could consider Agile to be leading at the organisation layer right now—or at least the real-world manifestation of the term. The one put forward by those who signed the Agile Manifesto is reasonably well defined, but the diluted term is faring better.

Agile, as it is, is winning as an environment in the environment, but it lacks a definite boundary in its current form. It’s hard to say what is Agile and what is not, which is itself a self-defence mechanism. This makes it difficult to differentiate it from the competition or even identify whether there are any competitors.

Good quality now, bad quality later

At the time of writing, there’s a general opinion that code quality comes down to a set of features. It often presupposes an object-oriented model, which is itself an interesting point. One version of this list might look like this:

  • Cohesion
  • Modularity
  • Loose coupling
  • Encapsulation
  • Separation of concerns

I feel these are secondary attributes; they are just our current judgements on what leads to good-quality code. I prefer to think in terms of how you might appraise these attributes. Think about a codebase that follows or doesn’t follow one of the principles, then ask yourself these questions:

  • How quickly can I debug code written like this?
  • What if I was not the author of the code?
  • How likely are changes to conflict when working as a team?
  • Does this style of code reveal errors early or hide them?

And now, remembering that the quality of code is judged finally by the environment, we realise these metrics are a kind of fetish. Some developers worship them for their magical powers. We must define good quality by the properties of that which survives. The code’s quality must relate to whether it helps us or our organisation last a long time. That will likely mean metrics for whether it can be easily verified and maintained should hold more clout than whether it passes some objective structural analysis.

As an example, there was a recent post on Twitter (recent as I write this) about a Dutch government codebase for some medical software. It included a way to print out a percentage on some character-based output device1. The code is immediately readable but unexpectedly verbose for what it does. Any experienced developer would wince at it. However, it survived through whatever processes it went through to get there. So, is it good code? Objectively, it is, as it survived its environment, even if hardly anyone would be willing to claim they wrote it. But it says more about the environment than it does about the code.

Metrics for the future

SOLID is a collection of five principles for object-oriented software design, espoused by Robert C. Martin and first published in his 2000 paper, Design Principles and Design Patterns. They provide some difficult-to-calculate metrics, but they are metrics.

The first metric, the single responsibility principle (SRP), is usually interpreted as the quantity of code duplication. Most paradigms would agree in declaring this to be a useful indicator of quality code. Duplication or repetition is a positive feature in physical construction, but we can quickly rationalise why it’s bad in code.

The fourth principle, interface segregation, can be thought of as a measure of how much more API you reveal than necessary to your clients. Many other paradigms do not need this metric as they don’t naturally suggest over-advertising capabilities.
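
Read as a metric, you could count how much of an interface each client actually uses. Below is a sketch in Python using typing.Protocol, with invented names: the fat Repository interface over-advertises its capabilities, while BlobReader reveals to the report generator only what it needs.

    from typing import Protocol

    # A deliberately fat interface: most clients use only a fraction of it.
    class Repository(Protocol):
        def load(self, key: str) -> bytes: ...
        def save(self, key: str, blob: bytes) -> None: ...
        def delete(self, key: str) -> None: ...
        def reindex(self) -> None: ...

    # A segregated interface reveals no more than this client needs.
    class BlobReader(Protocol):
        def load(self, key: str) -> bytes: ...

    def generate_report(source: BlobReader, key: str) -> str:
        # Depending only on load() keeps this code unaffected by changes
        # to saving, deleting, or reindexing.
        return source.load(key).decode('utf-8')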

The open-closed, Liskov-substitution, and dependency-inversion principles each help us detect object-oriented code which is harder to maintain. We can apply these metrics to object-oriented code and declare it solid or wanting. In that sense, they are the metrics for now, while object-oriented design dominates the programming world. But what will be the metrics of our future?

A strong candidate is the use of immutable state to provide protection from the issues of concurrency as they become ever more relevant to regular programmers. Another might be data unencumbered from meaning, with principles taken from data-oriented design. Full traceability through documentation and process artefacts could surface as an accepted quality metric, as it has through ASPICE2. Others might consider data-flow testing to be a good metric for quality or at least hygienic code. Developers who use Rust or Python can already write test code in their documentation, so will that become a quality gate?
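
Python’s built-in doctest module already supports that last idea: examples written in the documentation become executable checks. A minimal sketch with an invented formatting function:

    def percentage(part: float, whole: float) -> str:
        """Format part of whole as a rounded percentage string.

        >>> percentage(1, 4)
        '25%'
        >>> percentage(2, 3)
        '67%'
        """
        return f"{round(100 * part / whole)}%"

    if __name__ == '__main__':
        import doctest
        doctest.testmod()   # fails loudly if the documented examples stop being true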

Should we start including metrics for quality outside the language, such as build times and response times to exploits? Given how connected the world is, can we accept code with a guaranteed turnaround time for exploit fixes measured in weeks? Some binding agreements already specify a response time.

All the things we think of now as good metrics might be because of our presently accepted paradigms. There are things we put up with now because that’s just how it is. In the future, we will see them as mistakes. We should know better already. Is there a way to move forward with these metrics faster? To find them before others and make better, higher quality code sooner than the competition?

2

ASPICE is the Automotive Software Process Improvement and Capability dEtermination guideline and includes a lot of documentation on documentation.

Living things are not complete

There is a difference between a project being complete and a building being complete. I picked up this distinction from Christopher Alexander’s later works. He never says it directly, but the consistent recurring theme of generative production, repair, and improvement leads me to a further conclusion. The right approach to building never leads to something that is finished. We can complete a task or step of creation or repair, but no building that is made to be alive can be finished because change is inherent in a living and adaptable building.

Stewart Brand talks about this subject in the book and TV series How Buildings Learn[HBL94]. He references how long-lived buildings change at different shearing layers. Buildings change at multiple scales of size and time, from the location down to the arrangement of furniture. Buildings adapt at these scales, or they are razed. In effect, the buildings that prove unsuitable for adaptation are selected for culling. Buildings that cannot change to accommodate the needs of their residents will be removed.

The way I see it, the difference between living things and products is that products can be finished. A house or home is not a product; if it is, it won’t be a good one for future generations. A town is not a product either, and it would not be good if it were built like one. You only need to look at the usual problems with new towns in countries where old and new towns exist side by side, or at suburbs where relocated families now all depend on cars.

A product is used and enjoyed but ultimately kept as is, archived, cast off, or recycled. Generally, we don’t adapt products to new uses. I suppose this is why we recycle more than we upcycle or repurpose. We can mend a bucket, a pair of glasses, a car, a fence, or a shirt. But when something adapts to a new purpose, it’s visibly no longer what it was. A form that welcomes adaptation is a living form—a form where growth and change are part of its long life.

Living things are in a constant state of preparedness for adaptation. These are the qualities of a healthy code base, person, or building. The final product of a project is a codebase that can produce a product for the customer. We do not modify the products; we modify the design. We adjust the steps the compiler takes or the code that is interpreted. The living thing is the codebase, and only the codebase will be repaired or extended. The design is the structure and the materials out of which an instance of the product spontaneously materialises. Thus, the design of a codebase has two qualities: the capacity for change and the capacity to produce.

Buildings are inhabited by the builders while they are constructed, but they are later handed off to the inhabitants of the building. With source, the programmers inhabit the blueprints of the product, the building is built by compilers, and the users inhabit the deployment of the product. This strange mismatch reduces the efficacy of comparing software architecture and building architecture.

Resilient zombies

From what I have read of them, both Christopher Alexander and Helmut Leitner (of Pattern Theory[PT15]) appear to think artificial things can be made alive and living if and only if they are produced in a specific way. A living thing can emerge from successive alterations and generative processes at the most local and personal scale. They argue that the aggregate power of organisations or wealthy individuals requires pattern theory to guide the short-interval or iterative creation of living structures by applying these generative processes; otherwise, they fall foul of dead design.

Unfortunately, I don’t believe you can simply explain to money how it should behave. You must find a way to join pattern theory to something powerful organisations already value before they entertain such ideas. Wealthy individuals will want status symbol constructions, but by their very demand, they deny living structure. A well-used and loved old farmhouse is full of life, but it’s not of any value to someone looking to build a status symbol. Making sure your skyscraper is ready to be converted into something new and readily adaptable to changing circumstances doesn’t sound like something a business cares about. Investing in being easy to repair and adapt to changes in the surrounding environment is a far-sighted move few in a position of power would exercise or even give a moment of thought.

I doubt living buildings and living codebases are possible when you have egos and status involved. The best code is written by many people over time, all working without ego, spotting each other’s mistakes and collaborating on large-scale migrations from one good equilibrium to another.

Fundamental Properties

The journey started with Notes on the Synthesis of Form[Notes64]. The book showed a process for containing complexity when developing architectural solutions. It reduced the task of constructing a village from an impossible project to a conceivably possible one. But the most significant outcome of the work was not the development process; it was the concept of patterns, which went on to pervade his subsequent books and the software industry.

However, the development of the pattern language process revealed something else. Christopher Alexander stated that, even when publishing A Pattern Language[APL77], he was aware of somewhere between twelve1 and sixteen2 of these fundamental properties of forms. They kept appearing in the patterns of which he was most certain.

His most extensive work by far, the four-volume essay, The Nature of Order[NoO1-01], contains much on the concept of properties. It’s the main thrust of the essay, discussing their existence, value, interpretation, and limits, although not directly announced as such.

Why are the fundamental properties missing from so much of the available material on software design patterns? To understand the answer, you first need to understand the fundamental properties of forms and how they work.

Before discussing the properties, I need to state more precisely what the properties are of. They are properties of the forms of compound objects. They are properties found in patterns of architecture because buildings are compound objects. They are found in other things too, but Christopher Alexander mostly talked about these properties as being present in nature in living things, in the construction of buildings, furniture, and other objects with which we live and enrich our lives, such as teapots and cups, decorations and paintings[NoO1-01], and also, and in great depth, the designs of Turkish rugs[AFo21CA93].

The Nature of Order first describes the properties and later redescribes them relative to nature. Only in volume two[NoO2-02] do we find a discussion on how they come about. The transitions between forms with good properties are described therein as structure-preserving transformations.

This transformation aspect is the process side of the ‘process and a thing’ duality of the properties of forms. Given how strongly I feel that contexts and forces should drive patterns, I also believe the properties of forms should have a description starting from the forces that would lead to their presence. So that’s what I’ve attempted to do.

Rather than, once more, recite the properties as solutions, I attempt to produce a novel summary. Starting with what stresses would induce a need to take the structure-preserving transformation, I present the transition and what the form looks like both before and after. Just as a pattern should start from a position of an unresolved problem and move toward a solution, these properties of form follow a similar relaxing path. However, not all properties of forms are transformations, so in some instances the description has to fall back to explaining what they are and how they come about.

If you already know the properties and are not interested in this different approach to describing them, consider skipping to the section on colour, where we’ll get into the reason why these fundamental properties didn’t land for software engineering.

1

[Grabow83], p. 200.

2

[NoO1-01] p. 242.

Property 1 — Levels of Scale

Levels of Scale is introduced in The Nature of Order, Book 1[NoO1-01] on page 145.

As elements in a design grow in size or gain sub-elements, how those elements relate to each other can have varying levels of potency or relevance. What’s notable is that sub-elements of a superior structure tend not to be extreme in this variation.

When an identifiable design element has become too large, it may need splitting. When the shape of a design starts to become too complicated, it can be better to make it a compound of smaller parts. In each case, you can turn a more prominent or complex element into several smaller components without weakening its identity. The new elements will be healthier when they maintain good scale ratios with their neighbours.

A good design often has multiple levels of scale within the hierarchy or web of interconnected elements. These levels must be well proportioned. The scale relationship is between neighbouring sub-elements in the larger design or direct descendants in a hierarchy.

When the subdivision occurs, some sub-elements will strongly support the meaning of the outer element, and some will support it less, even though they are crucial. Not attending to these differences in relevance and scale can lead to designs that feel off-balance.

When designing characters for computer games, animation, or graphic novels, characters may look drab if they are too regimented in their details. The vibrance and sense of life immediately increase in the example below, where the sizes of the defined parts differ, but not by too much.

Character design benefits from levels of scale (Image courtesy of Johanna Taylor, https://johannamation.com)

We often see these cascading proportions in older buildings, furniture, or artwork, but not so much in modern creations. For example, look at a modern bookshop or library. You may find all the square spaces are the same size. Modular and regular with no variation of scale. The ordering of the books is more important than goodness of fit.

This property is generally missing from modular design due to the inherent nature of mass production. Modules tend to come in a small number of standard sizes and fit together in columns, rows, and grids. It’s efficient in terms of time to think and come up with a design, but not always an efficient use of space.

Standard-sized shelves leave quite a bit of wasted space and are too small for some large books.

Observe the furnishings in an older library or house and you will see shelves of different heights and widths. Even the ubiquitous Swedish furniture item, the Billy bookshelf, has variation in scale with its alcoves. There’s a practical reason for this: books themselves are different sizes. Levels-of-scale has value in simply allowing things to conform when their scale is relatively similar.

I appreciate this observation. Accommodation for all does not mean dropping to the least common denominator but instead offering alternatives. My bookshelf uses a modern modular shelf design, but the modules have the local flexibility I need. I can accommodate all sorts of different book sizes and the occasional radio-controlled vehicle or pocket electronic device.

My bookshelves are different sizes, packing in various books, toys, and electronics.

The well-performing scale proportions all roughly fall within a set range. Christopher Alexander states the most comfortable scale ratios are between 2:1 and 4:1 at the greatest[NoO1-01]. Think about the sizes of rooms in your home: nothing is 20 times as big as its neighbour. I also think there’s a lower bound to these ratios, something like 3:4. These pleasant ratios must be pronounced differences. If the difference is only slight, it does not appear to be an intentional effect, just an irregularity. If it’s small enough and repeated throughout the form, say less than a 1% difference, it reads as flexibility in the realisation. Perhaps it is just me, but at ratios of between 4:5 and 7:8, the differences look like mistakes.

When things don’t have these ratios, they can feel a little inhuman. Giant single-pane windows with thin frames can make you feel like you are in a goldfish bowl: you can be alienated from the outside, even though you literally see more of it than through a smaller, better-proportioned window. When you see small fasteners on large pieces of wood, it can feel like a temporary measure while work progresses. Things feel unfinished.

Large pair of windows, alienating from the outside Somehow alienating, even though there is so little keeping you separate.

In a form with comfortable levels of scale, you will often see substantial borders or things framed in gentle steps of magnitude. You see it in buildings with dado rails and skirting boards. When the scale is too extreme or not significant enough, it looks strange. We also see it in books. Sometimes, the text or headings are the wrong size for the script beneath. Images might be the wrong size for the layout of a page or the quantity of text it shares the space with. However, a well-constructed page can be a pleasure to view, even if you cannot understand the language of the text.

Levels of scale is the property of good proportions for different elements in the same compound. You can find good and bad examples in font design, website and page layout, cars, and consumer product design.

Sometimes, scale helps identify an area as separate. When enclosed within a larger space, a scale difference provides further meaning to a division. A couch or half wall placed well in a room can divide it so that we give each side meaning—living and leisure against eating. One is a world of softness and relaxation, and the other is a world of utility, quick to clean and a space where we prepare for duty. If these spaces were the same size, if the wall were directly in the middle, both areas would feel wrong. One takes precedence in your life, so reflect that in which side is given more of the volume.

Property 2 — Strong Centres

Strong Centres is introduced in The Nature of Order, Book 1[NoO1-01] on page 151.

All designs need an identity, a way to say what constitutes the design and what does not. Where the design ends and the rest of the world begins. But not all these beginnings and endings need to be definite. Boundaries and borders also delineate the form, but identifiable elements and parts of a whole design do not need literal lines around them.

When a design seems to have a weak form, it could be that the centres are not as strong as they need to be. Centres are always already present in a form, so this is a property which is not introduced as a new element but one where the strength of the centres is developed. Occasionally, recognisable forms are waiting in the wings but cannot emerge without support from neighbouring elements. But defining weak centres is hard, so I shall describe the strong ones first.

The property of strong centres is not so much a resolution of forces as the basis upon which everything else rests. It’s the atom of wholesome forms. Except, it’s not atomic, because atomic means indivisible, and strong centres are frequently divided or overlapping as you work your way down the hierarchy.

Overlapping centres The identifiable parts of an object often overlap.

The overall compound comprises a set of cohesive, identifiable elements containing recursively more cohesive elements. But, just as people belong to many different groups, so can some design elements manifest as parts of various centres in the same compound. These centres are the discernible elements. If you can see the shape somehow, even by negative space, then it can be a centre.

Overlapping larger centres The parts of an object can be compound, formed from the negative space, or simply an emergent shape of smaller details.

A strong composite form has to be a harmonious collection of centres. Each sub-form should be a complete and unfolded element in its own right. Strong centres overlap, and the overlapping is itself often a centre, but when things intrude rather than support, they break down. If you can’t effortlessly draw lines around things and characterise them as separate elements, then the form might not have strong centres. In a good design, everything should be a centre, and a strong one at that.

Overlapping smaller centres Each level of scale contains other strong centres.

Strong centres are those reinforced by the other centres around them, supported by multiple aspects simultaneously where possible. They are part of a sequence of things, a good relative size, part of a border, or the vital quietness offsetting the noise of a neighbouring compound. Strong centres, whatever they are, are secure, identifiable, and stable.

You can use weaker centres to emphasise a strong centre, but weaker does not mean weak. Weak elements might be unbalanced or skewed, look different or appear original for no obvious reason, or have some other weakness of relationship with their neighbours. Some centres can be allowed to be weaker because they are the elements at the edge, where they have to interact directly with elements from a different outer compound. Others might be strong locally but perhaps have been asked to give up some strength as part of a roughening process to relax a more critical centre.

Strong centres are reinforced by other centres, from within or without. Weak centres are those that don’t have good relationships or are ill-shaped. Now, let’s look at an example of weaker centres.

Weak Centres Many centres, but most are weak or not supportive of their neighbours.

Many centres in this design are weak or weakened by their neighbours. I adjusted three aspects to show how small the changes need to be.

  • The towers with their straight sides don’t lend any weight to the structure, so they weaken the whole castle.
  • The straight sides also make the structure look like it’s failed to be symmetrical rather than succeeded in having a bookended design.
  • The placement of the windows precisely halfway up the towers makes a false symmetry where none is required and creates equal space above and below, but it’s not completed with an equal-sized window, so it looks awkward.
  • The caps on the towers touch the top of the main body, not only detracting from their purpose in the design, but also weakening the main body by giving it two more sides without literal corners to define them.
  • With the caps touching the top of the main body, the space beneath the outside edge doesn’t feel consistent with the interior.

If you can detect these weaknesses, you can strengthen the complete design by improving the smaller centres. Each act need only be small to create a significant effect. I detected five failures affecting over seven centres, all from only intentionally damaging three elements. What do you see as weak in my attempt at a strong-centred design? What latent stronger centres can you bring out of the castle image?

Property 3 — Boundaries

Boundaries is introduced in The Nature of Order, Book 1[NoO1-01] on page 158.

As elements of a form become more detailed, new elements grow within them. Forms increase in complexity by seeding from within. This naturally leads to some of these inner forms having very large boundaries. As the forms grow, their boundaries do not deteriorate, for they are part of why the inner form is successful. Instead, these large boundaries are reinforced and remain part of the overall compound.

There are always edges to things: where they end and the rest of their world begins. When objects have an inside and an outside, there is a literal boundary. These boundaries can be strong centres and have significant presence and scale.

When a contrast is jarring rather than complementary, it weakens the centres involved. To protect the centres from discord, we can introduce a boundary that works well with both sides of the divide.

Some boundaries can be massive, almost as large as the thing they are bounding[NoO1-01]. They are not a thin line between things but can be thick like a moat or the margins of a book. They can be detailed, such as the dunes dividing a beach from the lands beyond or a brick wall surrounding the mouth of a well. They can be plain, such as the rim of a plate or the grounds around a building on a lawned campus.

They can be an absence, such as the solid material around the holes of a shower head. The boundary in the following image is not the thicker black line but the blank white gap between the outline and the detailed interior.

Boundary of nothing A boundary can be a conspicuous absence.

The value of boundaries can be how they extend the meaning of the separation while enhancing the visibility of the centre they surround. Boundaries can be around objects but also at the ends of linear things, such as columns or ropes.

The principle of boundaries is that they reinforce endings and divisions. In things that stop, they help us recognise the termination and the preceding existence. Boundaries fulfil the need to say something about the differences between things while recognising the value in their presence.

Without boundaries, colours merge, and ends seem unfinished. Weak contrasts start to show as a failure to differentiate rather than an attempt to include.

Boundaries are places for details to be added without making space for them artificially. Fences or colonnades, patterns around a form, details in a design surrounding an emptiness; they all make good boundaries that reinforce and contrast.

I believe a boundary helps a contrast fit with its surroundings. If something is too dark, detailed, or colourful, a good boundary can provide the distance to smooth the transition. I have edited the image of the door on the right to show how it looks weaker and out of place when it lacks a border.

Christopher Alexander used an example of a Gothic door. I replicate the example here with a different image of a door and an alteration to remove the surround, showing the impact of not having the necessary framing.

Boundary to make a contrast fit A boundary can help a contrast fit.

Boundaries can be cell walls or rivers used as borders of countries. Boundaries protect a centre from the world outside. So, a line is not a boundary, because without thickness, a boundary cannot protect.


So, although hairlines can indicate the location of a boundary, the true centre is the boundary extending to the white space above and below the line. The boundary is the whole form, the gap-line-gap sequence.

In books, some publishers aren’t aware of the importance of margins and push the text of a book out to the edges of the pages. Leaving only a tiny gap for a border makes every page feel like a poorly framed centre.

When I first learnt about typography and book layout, I did not understand why the inner margin can often be smaller than the outer, but now I see it as a way to emphasise the boundary around the two-page spread. The inner margin handles the fold and not much else. The outer margin is a companion to the top and bottom margin, not the inner. Boundaries are, therefore, contextual to where they are embedded.

Property 4 — Alternating Repetition

Alternating Repetition is introduced in The Nature of Order, Book 1[NoO1-01] on page 165.

Square grids are a typical result of splitting sequences from other splittings along a second dimension. The resulting rectangular grids feel a little too flexible when realised.

Repetition is natural, and grids are common, but even more natural than these are where the grid has relaxed into something where the tiniest irregularities are de-stressed by being aligned to an off-beat. Consider the difference between a rectangular grid and a hexagonal grid. One is easy to work with, but the other is more satisfying to behold.

There are no triangles in square grids, so they are just waiting to fold up on themselves. However, the hexagon or triangular field is sturdy, or at least looks that way. Hexagons remind us of stacks of circles, which pack naturally into alternating layers. Triangles are triangles, and we immediately see them as sturdy.

Hex and Square Hexagon grid and square grid.

Bricks are rectangles, so they look terribly fragile when laid out in a grid, but note how swiftly we change our minds when we layer them as in a wall, with alternating placement. Even when a regular grid is pristine and well-engineered, it looks weak and unsettled compared to a roughly realised alternating form.

Brick walls Two brick walls, one regular, one alternating.

Scales on a fish[NoO1-01], the way the weave of fabric goes under and over, the teeth of a zipper, the in and out of jigsaw puzzle pieces, and the directions of alternating tiles in herringbone paving, are all examples of the same kind of strong pattern of interleaving by alternating repetition.

The alternation does not just need to be about structure but can also be about usage. We build small shops into our communities by interleaving commercial and residential. We plan out land with stopping and going as the alternation, with paths punctuated by points where things come together. We string focal points[APL77] such as monuments, ponds, and playgrounds together with paving, roads, and dirt tracks—an alternating pattern of movement and stillness in relatively similar quantities.

Property 5 — Positive Space

Positive Space is introduced in The Nature of Order, Book 1[NoO1-01] on page 173.

We must introduce positive space when we develop for a while and the foreground becomes much stronger than the backdrop. We don’t always need to weaken what we have done to remedy the situation. Positive space means to look at the gaps our work has created and see where they are awkward or need some support. You can think about positive space as relating to kerning. It does not matter how good the letterforms are without good spacing between characters. Poorly kerned text looks odd.

Kerning Times New Roman, kerned properly above but not below.

When something unfolds, it does so within some other medium or environment. The change introduced by its presence can help or harm. The concept of positive space suggests the best results are when the unfolding also considers the goodness of the negative space. So, the patch of ground left behind by the placement of buildings must be good as well. The shape of the empty air around an object should be as inviting as that displacing it.

We find examples of how this can be done well in the placement of buildings around a cosy village green or in a nicely designed shopping mall. The place to which people go when they are not shopping is as important as the shops to the feeling of it being a good place. In an image, we see it in how the object of the picture leaves a pleasingly shaped gap around it.

It’s why we most often prefer justified text to ragged, left-aligned text, where the end of each line leaves a poor negative space. In the same vein, it’s why centred body text hardly ever looks good.

Some shapes seem like they should have good form (as we shall see in the following property) but have a tough time providing positive space. Circles are notorious for this[NoO1-01]; the lack of positive space is the reason organic shapes in buildings seem alien compared to squared-off corners and straight lines. Rounded shapes always leave a problematic concavity in the space they take up. Modern buildings with circular structures will have line-of-sight problems or unused space issues. Even the aforementioned circle-packing from alternating repetition sees this problem, as the spaces between circles have an extremely concave nature. To resolve this, we blow out the spaces in some way to make them better centres.

Gaps Gaps between circles have terrible shape, so open them up to solid triangles.

Circle packing is all about the circles. The gaps look like the leftover cardboard from punching out the tokens of a board game. They are waste. They are unusable. Without positive space, elements stick out, causing others to recede or weaken. The overall compound is reduced. The form as a whole is less effective for the sake of something that does not respect its neighbours.
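A quick bit of arithmetic (mine, not the book’s) shows just how awkward that leftover space is. Take three mutually tangent unit circles: the sliver trapped between them is tiny and deeply concave, and even ideal hexagonal packing always leaves roughly 9% of the plane as this kind of waste.

  # The curvilinear gap between three mutually tangent unit circles.
  import math

  r = 1.0
  triangle_area = math.sqrt(3) * r * r      # equilateral triangle with side 2r
  sector_area = 3 * (math.pi / 6) * r * r   # three 60-degree sectors cut away
  gap_area = triangle_area - sector_area
  print(gap_area)                           # ~0.161 -- a thin, concave sliver

  # Hexagonal circle packing covers pi / (2 * sqrt(3)) of the plane,
  # so roughly 9.3% of the area is always left over as these slivers.
  print(math.pi / (2 * math.sqrt(3)))       # ~0.9069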

Property 6 — Good Shape

Good Shape is introduced in The Nature of Order, Book 1[NoO1-01] on page 179.

Centres can become overextended or lose their simplicity due to pressures from outside or in. The spaces between the circles were an example of external pressure. Internal factors include when a design does not emerge from a simple process. Usually this property is not directly introduced by transformation, but instead it acts as a guard against a transformation which would bring about its absence.

This property is hard to comprehend, but my present interpretation of it is a wholesome and self-contained shape that is easy to consume. It is a way of saying that any element should be simple. Any complexity should present itself as a combination of things at a different scale, not as a convoluted shape or structure of a single piece. A form should be unified, consistent, and of one idea.

Sci-fi car shapes The shape of a sci-fi car is self-inconsistent and complex.

For example, the absurd-looking forms of sci-fi vehicles are not good shapes. Where the shape is not formed in response to the requirements of smaller pieces but instead whole cloth from the designer’s imagination, they are often inconsistent and complex. No construction takes place, so no lower-level order constrains the form.

Hand-drawn fantasy maps also have bad shape. They are generally unrealistic, as they have a forced shape from the author’s requirements. Their design doesn’t come from the natural laws of geography, so we can immediately tell they are fake through our instincts. Even the best random generators are detectable because the world has not been alive and there is no history to the location of the elements.

Fantasy Map Fantasy maps give themselves away.
(generated at https://azgaar.github.io/Fantasy-Map-Generator/)

Christopher Alexander offers this list of qualities to help detect a good shape without having a feel for it:

  1. High degree of internal symmetries.
  2. Bilateral symmetry (almost always).
  3. A well-marked center (not necessarily at the geometric middle).
  4. The spaces it creates next to it are also positive (positive space).
  5. It is very strongly distinct from what surrounds it.
  6. It is relatively compact (i.e., not very different in overall outline from something between 1:1 and 1:2 — exceptions may go as high as 1:4, but almost never higher).
  7. It has closure, a feeling of being closed and complete.

— Christopher Alexander, The Nature of Order, Book 1[NoO1-01], p. 183.

An example he points out is that of a circle[NoO1-01](page 183). It would seem to be a good shape, but it fails in so many cases. Not because it’s a weak form in itself but because it creates unhealthy negative spaces around it, breaching point 4.

Good shape often goes unnoticed because the forms are not tense or distracting. We are more tuned to detect that which is uncomfortable, asymmetric, or poorly placed in its environment. Still, we have good forms with good shape around us much of the time.

Christopher Alexander found many examples of the properties in Turkish rugs. Good shape shows up in the elements themselves but also in the shape of their groupings. Leaf motifs, frames, details, and branches all have good shape. But even in a good example, not all centres can maintain good shape without great care and attention. For example, look at the negative space around the detailed sections.

Good Shapes Many good shapes are interwoven in this rug.

— Christopher Alexander, The Center for Environmental Structure, The Nature of Order, Book 1[NoO1-01], p. 182.

In all, good shape describes a quality of things that have unfolded well and have not made things worse for their neighbours. It might only occasionally be an ‘unfoldable’ property of its own, but it is a property that repeats in good things.

So, rather than use it as an unfolding to resolve a tension, use it as a guide. Detect newly arriving tension or detect when a centre needs to be split into levels of scale, a centre and boundary, or some other compound.


Property 7 — Local Symmetries

Local Symmetries is introduced in The Nature of Order, Book 1[NoO1-01] on page 186.

This property refers to not just symmetry but overlapping and interweaving symmetries. Snowflakes and other self-similar shapes have something like this, but local symmetries do not require a fractal nature. Instead, think only of the idea that the repetition is a repetition of something made up of something with repetitions.

Rather than start with something lacking symmetries, this property emerges from how unfolding processes operate. We don’t often repair a form back to having local symmetries, as they naturally arise from splitting, repeating, or using symmetry to reinforce structures.

Partitioning and duplicating while mirroring or adjusting based on steps leads to this hierarchical repetition:

AA BBB AAA BBB AA CCCC AA BBB AAA BBB AA

A strong pattern is present there, but it’s not entirely self-similar. We needn’t be too strict about the facets either. This is cadence, poetic flow, and verse structure as much as mirroring or symmetry. Pleasant songs tend to have repetition at all different scales, and poems will have self-referential structural symmetry. Although a book of prose is unlikely to have much local symmetry, the left–right of the pages maintains at least a simple local continuity.

A terraced street will have symmetries[NoO1-01] in the row of houses. Even if each home has been made individual by its inhabitants, the alternating left–right door placement reinforces local symmetry. Other, smaller symmetries strengthen it further. The placement of windows and even the windowpanes and roof tiles deepen the character of the houses. It all adds up to a pleasing sequence along the whole street. The same is often true of any main market street with symmetric arrangements of roads, shops, frontages, and arcades with columns. Even the cobbles or paving slabs add to local symmetry. When you last walked along a solid and continuous poured concrete walkway, how did it feel compared to paving slabs or cobbles?

This property was demonstrated in a series of abstract tests often referred to as the paper strip experiment. The 35 strips had all the possible arrangements of three black squares on a strip seven squares long. Participants in the study were tasked with different tests to find an order to the strips according to how orderly, coherent, or simple they were. Though the tests themselves differed, their results were statistically consistent with one another.

Christopher Alexander collected the results and spent much time attempting to find a rationalisation for the order. Inspiration struck after some years when he considered local symmetries. He had been stuck on the idea of clumps and overall symmetry and had not considered how vital the sub-symmetries were. Calculating a score for each strip by counting every symmetry, even those where the colours do not change, suddenly showed a very strong correlation with the ordering.

When biasing for longer, larger symmetric sequences, it seemed the correlation worsened. It proved to him that the number of symmetries was the most significant, not their size. The symmetries and sub-symmetries are the significant parts of the property.
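A minimal sketch of that scoring idea, in Python: it assumes the score is simply the count of contiguous sub-segments that read the same forwards and backwards, which may not be exactly the counting convention Alexander used (he may, for instance, have ignored the shortest segments).

  from itertools import combinations

  def subsymmetry_count(strip):
      """Count contiguous sub-segments of the strip that are palindromes."""
      count = 0
      for start in range(len(strip)):
          for end in range(start + 1, len(strip) + 1):
              segment = strip[start:end]
              if segment == segment[::-1]:
                  count += 1
      return count

  # Build all 35 strips: seven squares, exactly three of them black ('B').
  strips = [''.join('B' if i in blacks else 'W' for i in range(7))
            for blacks in combinations(range(7), 3)]

  # Rank the strips from most to least sub-symmetries.
  for strip in sorted(strips, key=subsymmetry_count, reverse=True)[:5]:
      print(strip, subsymmetry_count(strip))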

The strip experiment and local symmetries in general can be considered to suggest the property is fundamentally about information density. Good forms are easily recognised. Ease of recognition comes from having patterns that match our preconceptions. Local symmetries give us something to latch onto. We can take one element of a scene and project it onto another as only a small piece of information—the duplication or mirroring of another element. We pattern match all the time, and the less pattern we can match the greater the cognitive load when trying to understand the whole—the more individual elements we have to concentrate on as we think about the compound form.

Paper strips

— Christopher Alexander, The Center for Environmental Structure, The Nature of Order, Book 1[NoO1-01], p. 189.

This parallels what we consider to be a natural unfolding. DNA works by a small number of rules, so there will be symmetries: every non-symmetric element is information, and there’s only so much information in DNA. Our sense of what is good will be tuned to look for things with this quality of similarity of element.

When we see a lot of detail with many different ideas, it either captivates us through curiosity or repulses us with its vulgarity. Almost everything we consider beautiful is simple in this respect. Therefore, wholesome forms present otherwise complicated-looking structures through repetition, with variation only where it is of vital importance.

Property 8 — Deep Interlock and Ambiguity

Deep Interlock and Ambiguity is introduced in The Nature of Order, Book 1[NoO1-01] on page 195.

Interlock can be introduced when a boundary would be too isolating, yet a lack of differentiation would muddle the centres and break their good shape. Where a gradient would make the transition between centres too ambiguous, we can instead introduce an interlock of distinct elements.

The elements creating an interlock will share something in common. A porch or arcade[NoO1-01] is part of the outside world but also part of the building to which it is attached. A puzzle piece is connected to another by staking claim to that which is within the boundary of its neighbour but somehow also definitely its own. Other examples are more literal, such as the interlocking of joints in wood or foreground–background ambiguity.

Dovetails Interlocking wood beams. Neither is subordinate to the other.

Interlock makes it difficult to determine which element was adjusted to make the connection work. When we cut stone so it fits together, depending on what we remove and from which stone, it can be difficult to determine which was fit to the other. Ambiguity creates a pleasant, tangled sensation, a knot rather than a mess. In examples of interlocking where we lack ambiguity, the knotting is less pleasing. It feels more like one element takes centre stage.

A deep interlock respects the positive space of the thing giving way. We can see a negative example of this in the interlock of Inca (probably not actually Inca, but it’s referenced as such, so I will call it that here) stonework. The central stone (it’s famous, and known as the twelve-angled stone) dominates the shape of the stones beneath it, but it is cut for the benefit of the smaller stones above. They did not create a positive space for the larger stone, so they are interlocked, but not as pleasantly as they could have been had they worked together to create a harmonious shape.

Interlocked stone The cuts are not ambiguous. Lower stones always make way for the upper.

More harmonious interlock comes when there is a bi-directional commitment to the locking. The tiling lizards of M. C. Escher deeply interlock, so no lizard gives way to any other. They wholly interlock with each other. However, this interlock has little ambiguity.

Ambiguity is more like the kind where the background and foreground are uncertain, such as in an octagon and square tiling. The smaller squares could be the foreground or the background. However, this pattern is not strongly interlocking.

Many examples exist in the patterns of wallcoverings or textiles. Christopher Alexander’s collection of Turkish rugs, with their interlocking designs, had many excellent examples of this feature. You can spot deep interlock almost anywhere by staying alert for these overlapping centres.

Interlocking shapes Interlocking shapes. Which is the foreground, which is the back?

An ambiguous interlocking form is one where the interlock is deep but not driven by one or the other aspect. Take, as an example, an anchoring pattern where the anchors reach into the space of the other side but also define the opposite shape, which is itself the same form. The interlock is clear, but which is locking the other is ambiguous. The same can be said of a three-circle knot.

When elements stand too far apart from their environment, not literally but as contrast without connecting, an interlock can help introduce them and make them part of the world. The covered paving outside a mall is both the outside and the mall and helps create an introduction, planting it better in its space. We can also actively avoid ambiguity and create a moment of hesitation. Notice how most service entrances open directly to the work environment and evoke a distinct split. The shock of the split reinforces how the public should not enter this way.

Property 9 — Contrast

Contrast is introduced in The Nature of Order, Book 1[NoO1-01] on page 200.

Centres can only really exist if they are differentiable. Wholesome objects are identifiable. An element is weakened without some boundary defining where it starts or ends.

If an element is not distinct from its neighbour, it will be seen as part of a larger whole, including that neighbour, weakening both and impacting the good shape. An element reduced this way recedes into a supporting role, becoming coupled and subordinate.

If this reduction is not wanted, if the cost of this diminution is too great, we have a force to resolve[NoO1-01]. If a boundary does not make sense, another option is available in the property of contrast. Contrast is a form of visible differentiation and enhances identity.

Contrast does not have to be colour but can also be texture, size, pattern, or anything we can sense. Borders must contrast with both the centre they bound and the surrounding space. I have a beautiful black bowl with a small detailed section at the rim. The detail contrasts with the plain matt body. The interior of the bowl has a glossy finish, contrasting to the matt exterior. Two different contrasts in the same object.

In The Nature of Order, Book 1[NoO1-01], on page 200, the prose lists some contrasts, including:

  • light–dark
  • empty–full
  • solid–void
  • busy–silent
  • red–green and blue–yellow
  • high–low
  • soft–hard
  • rough–smooth
  • active–passive

Opposites are anything juxtaposing, such as thin wires or rods against a thick trunk of a structure, a matt surface against a mirror-like polish, and angular design against gentle curves. The specific aspect that is in opposition isn’t critical, but it’s imperative that the contrast differentiates the elements, helping to strengthen them using qualities taken to their extremes.

But not all contrasts are helpful. When a larger body sets a style or motif, incorporated elements must work hard to fit in when they break from the theme. Other times, the contrast is contrived; it becomes the central element, diminishing the truer centres[NoO1-01]. Consider how nothing stands out in films these days, as so many have an orange and teal colour grading. If everything is in contrast, nothing is.

So, although contrasts are a valuable tool in creating identity, you should not seek them out. Find ways to ensure they exist where needed, but not in their own right or without reason.

Property 10 — Gradients

Gradients is introduced in The Nature of Order, Book 1[NoO1-01] on page 205.

It’s not often we introduce a gradient directly, but instead we watch it emerge from how we transform the elements. However, you can plan for it if you’re aware of contradictory forces in your context. Rather than make a contrast, bring a gradient to make the transition smoother.

As forms grow in size and detail, some composition elements will seem different and gravitate toward each other while moving away from elements unlike them. In this way, a gradient will form. We see it in cultural groupings, with some people bridging the divides, and in how pebbles wash along a beach.

Other times, gradients appear when the whole is large enough to span such a distance that it interacts differently at each extreme of its presence. At one end, the set of values contrasts differently than at the other. The differing forces create tension across the whole centre, and this tension is resolved by creating subdivisions or introducing a gradient.

Steps and stairs, sizes and colours, and gradients of elements are typical of good forms. Sometimes, they are literal gradients, such as in a rising elevation. The elements across a garden form a gradient from the low, rough areas to a higher ground of beauty or elegance. The gradient needs only to be through a space. As you travel through it, some aspect must change.

Consider the gradient-like nature of 112 Entrance transition. Standing in front of a doorway, you are in a public space. As soon as you enter the porch, you are beginning to be inside. Beyond this short-lived gradient, there is another pattern interlocked with it. The hall is even more inviting, and the deeper you venture into the building, the more home-like it feels—an 127 Intimacy gradient. Neither gradient is a sliding scale or gradual slope, but a feeling that grows or shrinks depending on how far removed you are from the outside world.

In organisations with security concerns, you might also encounter the secrecy gradient—a confidence gradient. The further into a building you get, the more locked down it is, and the more sensitive information will be present. Going from the front desk to a manager’s office to their filing cabinet requires progressively deeper acknowledgement of permission and authority.

Detail gradient A fire poker has more and more details towards the handle end.

Gradients can also affect the density of designs. Sometimes, the detail increases because the end requires some differentiation to achieve its goals. Tools often have detailed work ends or handles for this reason, and bones have the same increase in detail at their ends. However, the concentration of detail need not be discrete like in levels of scale. It can slowly change across a dimension, such as the spots and stripes on animals.

Reaction Diffusion Reaction diffusion can increase detail gradually, akin to the spots and stripes on an animal.
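For the curious, here is a minimal sketch of a Gray-Scott reaction-diffusion simulation, one common model for animal-like spots and stripes. The grid size and parameter values are typical ‘spots’ settings I have assumed, not anything taken from the book.

  import numpy as np

  n, steps = 128, 5000
  Du, Dv, feed, kill = 0.16, 0.08, 0.035, 0.065  # assumed 'spots' parameters

  U = np.ones((n, n))
  V = np.zeros((n, n))
  V[n//2-5:n//2+5, n//2-5:n//2+5] = 0.5  # seed the second chemical in the middle

  def laplacian(Z):
      # Five-point stencil with wrap-around edges.
      return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
              np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

  for _ in range(steps):
      UVV = U * V * V
      U += Du * laplacian(U) - UVV + feed * (1 - U)
      V += Dv * laplacian(V) + UVV - (feed + kill) * V

  # V now holds a gradually varying pattern of spots; plot it as a greyscale image.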

Most modern methods find realising good gradients quite demanding. Modular design works best when there are no gradual changes, so natural-feeling gradients become expensive and work against the value this approach provides. We should not devalue gradients; instead, we should point out how modularity inhibits this property from ever appearing.

Property 11 — Roughness

Roughness is introduced in The Nature of Order, Book 1[NoO1-01] on page 210.

Natural things will look better when they are not too pristine and perfect. Not out of some hearkening to a time long gone, but because perfection leaves little room for the adaptation required for the best possible fit.

When your repetitions are precise, or your lines are no thicker than a hair, you open yourself up to cascading failure and complexity of interaction. A 5mm movement in one piece will require every other part to make space for the change. We want to reduce complexity and keep things simple. So, to allow local changes to happen without disturbing others, we introduce and expect an amount of roughness that lets us work around any imperfections we encounter.

In The Nature of Order, Alexander states that this makes them ‘more precise, not less so’, which I interpret as the opportunity to adapt granting the forms a better capacity to reinforce their neighbours properly and not be disturbed by their irregularity.

If a grid is made of fragile modular pieces that cannot withstand bending or cutting, then the overall structure will be only as flexible as those modules. When you see a large or long tent at an outdoor event, it follows the contour of the grounds or uses ropes that take up any slack. Look at a brick wall on a rising street and see how the bricks help because they are cuttable.

If you can’t bend things a little, the final form will seem awkward or impinge on its surroundings. Roughness makes the slight movement and variations normal. When you expect deviation, adjustment is natural and fitting. Then, you get the deeply interlocked, tight, snug but relaxed final forms.

Without roughness, a shared edge feels off. Without roughness, a picture feels unreal. Even a brick wall, with perfectly laid bricks, still has a roughness of the brick surfaces. You may have been in a building with wall coverings imitating bricks and noticed they didn’t look right. A linoleum floor which emulates tiles can look tacky when the tiles are all too perfectly placed. These perfect versions are not better. They are inhuman. They suck the feeling out of the space.

Fitting bricks The perfection is uneasy.

Note that the perfectly interlocking blocks of the Inca stonework walls mentioned in the deep interlock section do not have a feeling of genuine roughness. They are too perfectly placed. They took too much effort. The dry stone walls of an English countryside might be too loose, but something in the middle is the goal.

Property 12 — Echoes

Echoes is introduced in The Nature of Order, Book 1[NoO1-01] on page 218.

Echoes, or self-similarity, are similar to design motifs. There can be a recognisable element occurring at different scales and under different guises. It could be a colour combination, a set of angles, or how things are connected. When we echo the details this way, it makes the whole a cohesive thing, even if it goes on and on.

Rather than actively introduce echoes, we instead attempt to maintain some self-similarity as we accept new elements into the design. We reject ideas that add unnecessary variation and invite others which mirror aspects of the elements already present.

When a project grows, more people work concurrently on its development. Having more minds invites more ways of solving problems. But if we restrain ourselves and maintain the vision, we produce work with echoes throughout. When a design starts to veer off, we see more themes or fonts than are necessary. A level of variation is typical of something living but those variations will be thematic because natural processes usually pull from a palette of options.

We can see how it hurts in collections with limp central themes which need more definition. Some buildings try to conform simultaneously to multiple styles and exhibit a loss of identity. Art that attempts to represent more than one aspect can lose itself to meaninglessness[NoO1-01]. The same statement can be applied to a presentation or document that doesn’t make a cohesive argument. I hope this last point isn’t relevant to this book.

Sometimes, all you need to do to create echoes is use the same materials. Materials tend to guide their use and often suggest themes by their limits. So does the restrained usage of fonts in your document or a narrow palette of colours on your website. When you have a lot of different styles, they break the echo. You must tame the styles by grouping them or separating them into contrasting collections rather than leaving a soup of randomness.

Any architecture lacking echoes almost implies it has too much complexity to remain stable. Echoes of forms help create repeatable patterns of stresses and strain, which allows the builder to judge the limits of the material better, as there is less to consider. Without echoes, there are too many variables. Notably, echoes are often absent in software architecture, as DRY causes echoes to look suspicious.

Echoes are used to repeat elements over different levels of the hierarchy of life. We have giant malls, superstores, mini-markets, and corner shops. These are like coarser-grained levels of scale of the same resource. They cater for the differing scales of different types of consumers. From national parks to regional parks to local parks to local common green spaces, the scale of space and use is graduated but echoing. Each is a scale up or down to the next and helps address the needs of a different user, as the users echo across different ages and group sizes. But notice that the scale is not too close. The small park feels unnatural next to another, similar-sized park. A small shop can be more competition for another small shop than a mall is, as a mall brings a different type of consumer.

Property 13 — The Void

The Void is introduced in The Nature of Order, Book 1[NoO1-01] on page 222.

Sometimes, a significant non-presence is what is lacking. Too much of anything is, again, nothing. One way to think of the void is to create a sense of scale and relevance through the presence of an absence. If your design is devoid of high-level focal points, perhaps it is because there is nothing simpler and larger for them to have value in relation to.

You can find the void in buildings and images, but also in the dynamics of a concert where there is a silent section. It presents itself in a film as a lull leading into action. The void is an active lack placed in a prominent location, bringing about a contrast of information—the part that sets the negative example.

In buildings and artwork, it can be a literal space in the design: a great hall, an emptiness on the canvas. The void can be a way to suggest a place to stand to perceive everything else. The void helps in the way the air in a bubble is fundamental to the skin.

The void also forms a contrast, suggesting what would happen if the rest of the compound was absent. It can make elaborate designs seem more intricate without introducing noise. The void can be a focus. It has a sense of solemnity compared to its surroundings and provides a way to suggest maturity, freedom, or a reflection of the self. Compare the plans of buildings that have a void with those that do not, and you will notice that those with a void have immediate character. They seem like places where a purposeful assembly could happen. It could be as mundane as the central courtyard of a farm: the large open area between the home and the many outbuildings. The yard is a place welcoming a crowd, but also a gap over which your mode of being can switch from work to home.

Christopher Alexander uses[NoO1-01] a typical 1970s American office building as an example of where the void is missing. They almost never have anything like this. Rather than allocate space for a large hall, they have many smaller meeting rooms. The latent void is often filled with desks, divided by stubby walls upon which yellow notes carry essential information such as passwords. As effective as the void is, it is inefficient and so is removed from all possible suggested floor plans.

But the void need not be empty to be a void. It is simply a calm space—a realm of neutrality amid the intentional design. In a computer game with a vast world, between the noise and information-dense cities or markets, there is often a land of walkable terrain. This terrain can act as a void, even when it contains much wildlife. In this sense, the void amplifies the noise in the game hubs by contrast of opportunity.

I do not see the value of the void in code, but the main loop or the central dispatch might be the hub around which all valuable transformations take place. They provide some of the value of the void. They are something of a focus, but also somewhere to which value is contributed from outside. A main loop does nothing by itself. Neither does a message bus. They are the busy nothing at the centre.
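As a rough illustration of that ‘busy nothing’, here is a minimal message bus sketch. The names (MessageBus, subscribe, publish) and the order-handling example are purely hypothetical, not drawn from any particular library.

  from collections import defaultdict
  from typing import Callable

  class MessageBus:
      """A hub that does no valuable work itself; it only dispatches to handlers."""

      def __init__(self) -> None:
          self._handlers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

      def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
          self._handlers[topic].append(handler)

      def publish(self, topic: str, message: dict) -> None:
          # The bus transforms nothing; the value comes from the subscribers.
          for handler in self._handlers[topic]:
              handler(message)

  bus = MessageBus()
  bus.subscribe("order.placed", lambda msg: print("charging", msg["total"]))
  bus.subscribe("order.placed", lambda msg: print("emailing", msg["customer"]))
  bus.publish("order.placed", {"total": 42, "customer": "ada@example.com"})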

Property 14 — Simplicity and Inner Calm

Simplicity and Inner Calm is introduced in The Nature of Order, Book 1[NoO1-01] on page 226.

Some forms cannot be calm or simple. What they are about is their complexity. Examples include patterned wall coverings or other decorative elements. However, some things must gain this property as they grow in size. They must simplify, attain an essence of eternity, and begin to emanate a sense of stasis. Not stuck or dead, but steadfast, unshakeable, and waiting. Noisy things do not seem to have this. Simplicity comes first, then calm.

When a large object, or at least one that dominates a space, has too much character, we want to slow it down and mute its harshness. As beautiful as they are, many Western buildings constructed as a form of praise do not have this property. They have too many parts that seem like they are still in motion. The buttresses are flying, and the friezes are not frozen. The details mean there is nothing on which you can rest your eyes and defocus.

The good examples in architecture abound with columns and massive stone structures, but also with things that are not so much large as simple and not heavily ornamented. The inclusion of fewer details magnifies the apparent size of these simple things. We find the opposite in buildings with haste in their lines. They may be adorned with facets that work well but never catch a breath. They are, therefore, not calm.

This calmness can be the difference between word processors and distraction-free writing applications.

In The Nature of Order, Book 1[NoO1-01](page 227), Christopher Alexander’s suggestions for Shaker furniture (which I paraphrase here) are:

  • It uses very simple shapes.
  • The ornament is very sparse, but is occasionally present …
  • The proportions are unusual. Elongated, high, long, tall, broad, etc, but for practical reasons such as to use all the available space.
  • Strange in some specific way, such as unexpected openings or additions. In effect, emphasising an extreme freedom to do what needed to be done.
  • Coloured in a deep way. Right into the wood. Not painted.
  • Finally, everything is still, silent.

These properties generate inner calm in Shaker furniture. We can think of some of these points as not responding to judgement, not caring about appearance but being its authentic self. The mood is all about having a duty and performing it in the best possible way despite tradition or other expectations. This inner calm is a respect for itself and confidence to exist.

parapet Eclectic style, not calm, not simple.

For negative examples, consider the inclusion of a parapet on a modern building. It’s an old form, but it is not in keeping with the theme. It marks itself as quirky and actively different. This is the difference between an eccentric who wears their dressing gown when going out in public simply because it’s comfortable and the ‘look how random I am’ teenager who wears odd shoes as a form of self-expression.

So, inner calm does not exist in wacky objects, whether in architecture, furniture, art, or rugs. Elements that break from tradition for the sake of breaking are not calm. They are tense, waiting to see a reaction. They become dependent on the observer and are weakened by this neediness because they are for something other than their own completeness.

Property 15 — Not Separateness

Not Separateness is introduced in The Nature of Order, Book 1[NoO1-01] on page 230.

This property is not usually introduced, but maintained as part of an unfolding process. Tension increases when it is ignored, so just as with good shape, the property is used as a way to judge change and reject it if it causes deterioration.

Extending a creation can interrupt the space. The themes can clash rather than contrast. The boundaries can alienate rather than define. When this is the case, we need to resolve the differences back to what they should have been already: adaptations to repair a problem.

The whole thing should be cohesive, not just within itself but also within the environment of which it is a part. It was already mentioned in the section on good shape and positive space how the outside matters. Not-separateness extends to how the thing should be present in the space by complementing it, even healing it of something that currently stresses it.

Lighthouse Loud but dutiful, not pretentious

It means not to stand out[NoO1-01] but to be at home. When thinking about not-separateness, we should regard the outer compound of that which we’re observing because it includes the environment. It would stress the levels of scale to have one element stand out compared to hundreds of others nearby.

Christopher Alexander found it lacking in some buildings. They seemed to shout, ‘Look at me!’[NoO1-01](page 231), which harks back to an ego-driven creation—a creation that wants to stand out because its value is in its peculiarity or distinctiveness. Perhaps it is trying to say something about the creator. These separated things stem from a sense of self-absorption.

But being obvious is not always separate. Consider grand cliffs, which almost cry out how awe-inspiring they are. But they are very much part of the land. A lighthouse on a peninsula is bright and radiates out for all to see—but not for its sake. A church stands out proudly as a beacon for the faithful, but it welcomes and includes the parish with grounds oriented to create a boundary that smooths any transition.

Not-separateness is a pervasive property for all elements because it requires everything to be somewhat connected; otherwise, it is disconnected. And disconnects are rifts we need to heal.


Summarising the properties of forms

The 15 properties of forms describe the attributes we see in good and wholesome compound objects in the physical world. They are interrelated and cannot be considered distinct attributes of things but aspects of all well-formed compounds. In The Nature of Order, Book 1[NoO1-01], Christopher Alexander makes these relationships explicit, using the following table in his explanation of how they depend on each other.

Property relationships — Christopher Alexander, The Center for Environmental Structure, The Nature of Order, Book 1[NoO1-01], p. 238.

For example, without strong centres, there can be no reference to the good shape of those centres. Local symmetries make little sense when you cannot differentiate, so they depend on contrast. What is local about the symmetries is often defined by where a symmetry stands in the levels of scale or in relation to the void. Gradients imply a form can flow over a field, connecting to not-separateness but also contrast. Boundaries and levels of scale imply centres and thus rely on strong ones to help define them. Contrast requires separation, so it often implies boundaries or characterises the transformation of a gradient into a more explicit transition.

This interaction between the properties would be a weakness if they were building blocks of construction, but they are not. They are the properties of forms produced by taking natural steps of growth. They can be differentiating, extending, relaxing, defining, or refining. But they are never a mindless accretion of novelty or the result of top-down outside-in thinking.

These are all properties discernible when the growth process is a sequence of unfoldings punctuated with moments of relaxation. In effect, fully complete steps, including the moment to let things settle long enough to pick up on subtle hints to the best next direction.

Christopher Alexander connected the properties to natural processes[NoO1-01]. Gradients form as the ends of an element relax and regress to the mean of their surroundings. Levels of scale form as parts split, with some resultant sub-elements growing slower than others as things progress. Alternating repetition forms around weak sequences that collapse and pack into robust structural patterns through annealing or sliding into place. Roughness is recognising the negative: that purity and regularity often indicate a stress on surrounding forms. Boundaries appear as connective structure rather than allowing for weak edges. Deep interlocks form when boundaries or contrasts fail to satisfy forces completely.

In effect, Alexander pointed out that all the properties are what we should expect when we grow a solution by the mechanisms natural to the physical world and when we allow it to form through a sequence of steps inspired by local revelation.

Can we use the properties to guide us when creating the sequence of actions we take? We can ask what is not quite right about the properties of our current forms. We can ask what feels stretched or underrepresented in the current configuration. What appears to be missing can always guide us to a healthier form.

But we can only do this if we have the properties for our domain. These properties are unknown with code and, like with prose, possibly absent. However, the payoff for discovering them could be huge.

The properties of colour and beyond

Christopher Alexander did not stop at the fundamental properties of forms but also tried to find fundamental properties in other aspects of design. He studied colour to a great extent and surfaced 11 properties. Try as he might, Christopher Alexander claimed he could not find a way to map the properties of colour directly onto the existing 15 properties of forms[NoO4-04]. Instead, he mapped them loosely, with some overlaps.

  1. Hierarchy of Colours
    (levels of scale)
  2. Colours Create Light Together
    (positive space)
    (alternating repetition)
  3. Contrast of Light and Dark
    (contrast)
  4. Mutual Embedding
    (deep interlock and ambiguity)
  5. Boundaries and Hairlines
    (boundaries)
  6. Sequence of Linked Colour Pairs
    (gradients)
    (the void)
  7. Families of Colour
    (echoes)
  8. Colour Variation
    (roughness)
  9. Intensity and Clarity of Individual Colour
    (strong centres)
    (good shape)
  10. Subdued Brilliance
    (simplicity and inner calm)
    (not-separateness)
  11. Colour Depends on Geometry
    (local symmetries)

— Christopher Alexander, The Nature of Order, Book 4[NoO4-04], p. 173.

There are fewer properties of colours than there are of forms, but they cover a very similar set of attributes. Colours are used in various ways, from images to photo composition to wallcoverings to the exterior colouring of houses, cars, and sites. Whatever the usage of colour, it is seen. A web page uses colour, and so does a painting in a gallery. They may not be in the same league, but perhaps that merely depends on the painting and the site. In each case, the spectator of the colour views a scene in which the colour has its chance to shine. So, rather than talk about buildings, paintings, rugs, or objects, I will refer to the entire set of observable ‘colourables’ as scenes.

First, I will attempt to describe a few of the properties as I did for the 15 properties of form. I won’t do that for all the properties of colour, as you don’t need to know all of them to understand how they are similar and how they differ.

Hierarchy of Colours

Hierarchy of Colours is introduced in The Nature of Order, Book 4[NoO4-04] on page 174.

As with levels of scale, a scene has proportions of colour. With levels of scale, identifying the proper proportions relies on the repeated or contrasting parts. With colour, each pigment’s hue, saturation and brightness will affect how much of the image it should cover. The hierarchy of colours is numerical, based on the area of the image that is covered by each colour[NoO4-04]. Invariably, a good ratio is not an equal ratio. Symmetry is never relaxed with colour.

I also note how a heavy colour will appear to take up more space even if it consumes slightly less. To balance a scene, a weighty colour must either dominate or recede. ‘Heavy’ does not refer to dark but always means an imposing colour. I suggest a brilliant green is a heavier colour than a pale pink. I also note that colour weight is very often evaluated relative to the other colours in a scene, so a strong blue in one scene recedes in another by lack of contrast.

The ratios, like with levels of scale, are found by observation and adjusted into place. They are often proportional, in the realms of 2:1 to 5:1, but also include sub-sequences of ratios. A good combination of red, blue, and yellow might be 7:4:1, with the yellow so intensely brilliant that it must be tamed and surrounded by the closer-weighted blue and red[NoO4-04]. But colours can work even at more extreme ratios than physical forms can, as the presence of a minimal quantity of a vibrant colour can impact the entire image by contrast. So, you can often see good combinations where the ratio is as high as 20:1. But images muddied by too many colours have no intense primary. Some colour needs to be the dominant one. In the remaining space, another colour needs to dominate that too.
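If you want to inspect this hierarchy in a scene of your own, a rough sketch is to quantise an image down to a handful of colours and compare how much area each one covers. The file name ‘scene.png’ and the choice of five colours below are illustrative assumptions; the Pillow calls are standard.

  from PIL import Image

  img = Image.open("scene.png").convert("RGB")
  quantised = img.quantize(colors=5)  # collapse the scene to a small palette
  counts = sorted(quantised.getcolors(), reverse=True)  # [(pixel_count, palette_index), ...]

  total = img.width * img.height
  for pixel_count, palette_index in counts:
      print(f"colour {palette_index}: {pixel_count / total:.1%} of the scene")

  # The ratio between neighbouring colours in the hierarchy; the good examples
  # discussed above tend to sit somewhere between roughly 2:1 and 5:1.
  for (a, _), (b, _) in zip(counts, counts[1:]):
      print(f"ratio to the next colour down: {a / b:.1f}:1")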

But dominating the other colours usually leads to having less meaning. Typically, the rarer colours control the message of the scene. The small amount of black or yellow in a light or dark background will dominate the structure, if not its overall hue. This juxtaposition, which defines the visible structure, leads to the next property.

Colours Create Light Together

Colours Create Light Together is introduced in The Nature of Order, Book 4[NoO4-04] on page 179.

A colour on its own says little. For it to have meaning or convey a sense of softness, harshness, warmth, or serenity, it must be brought together with another colour. The obvious case we can think of is the orange and teal colour grading in many modern films. The need to make flesh tones pop led to many movies being recorded and colour-graded to a uniform blue–orange wash. These complementary colours work well to increase the vibrance in each other, even when we have already seen a hundred overly graded productions. They are fundamental colour theory contrasts of hue. But creating light does not have to be about contrast; just what makes an impact by relation. Some colours are raised in their value for a scene by a nearby colour acting to reinforce how deliberate or nuanced the choice was. The difference between red and gold is not so huge, but so much artwork with red–gold colour exists because the red reinforces how pure the gold is, and the gold reminds us of how warm red is when next to an orange or yellow colour.

Christopher Alexander sets this property alongside positive space[NoO4-04], seemingly because a colour creates a field in which another colour 'shines'. But he also connects it to alternating repetition. This link reminds us that the essential element is the contrast, not the particular colours that complement each other; what matters is the enrichment they bring about.

It’s worth noting that colours support each other, even when they are nearby neighbours, as long as their proportions are right and they have some other way to cast light on each other. This leads to the next property.

Contrast of Dark and Light

Contrast of Dark and Light is introduced in The Nature of Order, Book 4[NoO4-04] on page 186.

Two colours that generally work well together can muddy each other if light and dark are not handled with care[NoO4-04]. Colours have hue and saturation, but they also have brightness. Each primary colour has a natural level of lightness: green is the brightest, then red, then blue. If you intended to create a nice contrast with red and green but brought the green down to a darker shade, the complement would be weaker. Some artists would call this working with values, not colour. The concept is simply that an image should work as well in black and white as it does in colour.

What this means for us

Rather than going into detail on all the remaining properties, I will stop here. You now have an idea of what each contains, and you can begin to see how they connect to the fundamental properties of form. The properties of colour are similar, but even where they are very similar, some differences stand out. This makes sense, as how we evaluate these properties has so much to do with what looks natural, and what looks natural is that which naturally occurs.

I believe there will be other sets of fundamental properties in other domains of human perception. Immediately, we might assume there are properties for music, food, art, dance, poetry, and prose. I would not want anyone to presume the lists below are final; I present them here as examples of what we might find if we were to look for properties in other domains.

  • Music — Western world variety

    • Hierarchy of Cadences: almost levels of scale, but closer to echoes and alternating repetition, with a touch of local symmetries.
    • Harmony and Discord: contrast and gradient but also not-separateness.
    • Repetition with Variation: not roughness or alternating repetition, but strong centres or levels of scale or gradient.
    • Symmetry of Structure: like echoes and local symmetries.
    • Dynamics: like contrast and the void.
    • Gradient of Complexity: gradients and levels of scale.
    • Echoes of Motif: echoes, but also alternating repetition and simplicity and inner calm.
    • Segments in Time: like boundaries but also some of deep interlock.
    • Key: simplicity and inner calm, good shape, and not-separateness.
    • Clarity: positive space (not having three bass guitars and a singer).
  • Food

    • Contrast: just one or two at most of each food group, but unrelated to the contrast property of forms. This is closer to strong centres.
    • Full Palate: the presence of most elements in the meal, if not each part of a meal, bringing balance. Good shape might be the closest property of forms.
    • Differing complexity: like levels of scale or gradients for the elements of a meal. If everything is trying hard to be the centrepiece of the plate, nothing is.
    • Separate but complementary: deep interlock and ambiguity by making foods that work well together because they offset each other’s weaknesses (a salty meat with a starchy side, a creamy sauce on dry food, or a tangy wet mixture, such as in lasagne).
    • Sequence of granularity: chunks of something in grains or sauce, not two types of chunky things together or two sauces mixing, but salads often break this.
  • Art

    • All the same properties as architecture. This one stands out to me because it holds for any art form, from weaving (Turkish rugs) to sculpture. Art is part of the lived visual domain, so it shares all the properties of forms; works of art often are forms, and compound ones at that.
  • Dance

    • Complementary Top and Bottom Half
    • Lead and Follow
    • Rhythm at Many Scales
    • Contrast with or against Co-dancers
    • In Place, Around, Along, Reverse
    • Flow and Lines
    • Balance and Energy
  • Poetry or prose

    • Hierarchy of Subject
    • Reveal/Guide Alternation: sequences that pique interest, guide reasoning, and reveal a conclusion.
    • Variation at All Levels: line length, paragraph length, paragraph and line structure.
    • Repetition and Avoidance of Repetition: in words or style, repetition is a property of its own, but too much repetition becomes a mess.
    • Level of Language: an audience-specific level of wording, kept consistent.

It may be my inability to produce good properties for these domains, but I have noticed something: music is not intensely full of properties. Many animals also use a dance or a sound to indicate things, so some properties exist. However, they don't appear to be fundamental; they are almost idiomatic to a species. We know music has changed over the centuries, and different cultures have different forms in their compositions. The set of properties of music is likely stronger than that for prose and poetry but much weaker than that for art.

Food is also weaker. This may be because we’ve only been making meals to satisfy the eyes for a few thousand years; before those times, perhaps food was just food. Are cakes and courses a relatively recent invention in an evolutionary time frame? Art is a strong contender for properties because it stems from the same roots as architecture. But dance is again only middling. Many animals dance, and we’ve likely been dancing as a species since before we had words to describe it. But I believe it’s idiomatic to a species, not a natural outpouring of the environment in which we exist.

As each domain moves further from our evolutionary basis, it loses the chance for us to have found or sought universal properties. This does not bode well for code or organisational structures.

Is there a reason these fundamental properties exist in differing strengths in these other domains? Why is it that the further we step away from our evolution, the weaker the properties seem to be?

Perhaps the answer is that the 15 fundamental properties are fundamental to growth and adaptation. Similar properties will not exist where a compound thing does not come into being through a sequence of steps. The fundamental properties exist only because they match the process by which things reach their final form, and the properties relating to this unfolding process persist because the forms they describe have survived millions of iterations of unfolding.

Why do I suggest this? Because of the quality without a name. We judge with our evolved sense of quality, our innate sense of what is right. This sense of quality has been trained over millions of years to match what is supportive in the environment in which it was taught. These properties list the metrics our evolved self uses to determine what is good. I believe Christopher Alexander discovered the scientific metrics of quality for humans.

But there is some strange hope that some fundamental properties can be found outside the elements of our ancient world. For example, prose does not appear to be a naturally emerging element of the universe, so any set of fundamental properties can, at best, only loosely mimic the true fundamental properties. And yet, even in this domain, we have some evidence of the quality without a name.

Robert M. Pirsig wrote about the aspect of quality in rhetoric. He claimed his students could read two passages and know which was better[ZatAoMM74], even if they did not know why. And by know, he meant they would agree with him, but in essence, this is the same as Christopher Alexander’s experiments discovering overwhelming agreement in his test subjects.

Robert M. Pirsig explored the elements of quality much further in his second book, building a picture of how it works. He explained how quality is fundamental, and the sense of it preceded our development of analysis methods[Lila91]. To use his terms, we perceive quality without defining it (he called this ‘dynamic quality’) and only then name it, quantify it, and build our analysis methods and reasons around it (he called this ‘static quality’). But it’s all the same quality all the time.

If there are fundamental properties of prose, then this disproves any theory stating they only exist for something ancient and evolutionary. It also suggests the unfolding nature may not be required and may be specific to only some domains. Let’s assume the fundamental properties of architecture and art result from an unfolding process of nature. In that case, we should look for the fundamental properties in whatever process creates good quality in that domain.

We should be able to find fundamental properties of organisational structure or anything else humans can accurately compare. After all, we know the difference between a good-feeling organisation and a less-than-good one. Finding those fundamental properties is a complex task that will take a long time, but each domain will have some, and they will work to guide the production of a better outcome.

But for domains where we can’t trust human instincts, such as those where the environment and fitness are alien to us, we must first find the measure—the fitness function. Code has a growth process, but we can’t easily tell good code from bad. Finding the properties here will take more effort. A better, healthier process should lead to the properties, and if we were to chance upon the properties first, they could lead to the process. How we find either is still up for debate.

The Potential of Patterns

All software development is design. It consists of decisions. Whether it’s a payroll app or a next-generation computer game, the value of code and data is its information content. All our efforts to create software come down to creating new valuable information by deciding how to act upon existing information. This simplifies the process somewhat. There are many stages to decision-making, but at its core, this is the truth of all designs.

Decisions are affected in different ways as you take the steps to a conclusion. When you design, you turn data and evidence into information by collecting, filtering, and interpreting. Once you have that information, you combine it with other information you hold: your memories, analytical processes, biases, and tendencies. The combination is your assumption of the true state of things, and it often includes what things should look like. You then generate a tactic to resolve the gap between the two. That tactic is the action you will take, and the outcome is the decision made real. Changes are the realisation of decisions.

These steps can often happen so fast that we don’t even notice ourselves stopping at any of them on our way through to acting. When you want to get the largest value from a collection, you select a max function because you interpret the largest as the maximum without thinking about it. However, the route to action can sometimes take a little more time and investment.

Deliberate decisions

What slows us down is having to expend mental effort on any of these steps. In Daniel Kahneman’s words[TFaS11], it’s when we use the slow thinking part of our brain to make a decision. Many things can snag our minds into slower thinking.

  • Unexpected data: When you come across data you don’t know how to interpret, you need to read up on how to analyse it or find new tools that can. When data seems missing or you can’t detect it, you need help getting what you need.
  • Unexpected interpretation: If the data is familiar, but the interpretation seems odd, you seek causal relationships or advice from a person or other resource.
  • Lack of assumptions: We might need to think more carefully about what data means and what we can assume because of it. Your findings may break existing assumptions, asking you to revisit past decisions.
  • No default tactics: If conclusions are unexpected, you need to discover new tactics to employ. When an action is tedious or takes a lot of energy, that also slows us down, as we have to expend willpower to commit to the task.

But at some point, you can’t decide any more. There is a limit. Medical practitioners recognise decision fatigue, indicating, at least in my uneducated reading, that we can make people ill by creating environments which demand of them a high number of decisions. So, if we can’t increase the number of important decisions we can make well per day, we need to find out how to filter the decisions to only those that are essential.

Patterns help by making decisions easier, or by making them for us, in the same way coding rules and auto-formatting tools decide some of our actions for us. If we measure our productivity by counting the valuable decisions made per day, we can begin to find new processes to help us.

Reducing cognitive load by limiting the number of decisions that must be made could be a positive effect of design patterns. They help frame data into interpretations and offer guidance on how to reveal the tensions in a situation. Tension is an is–ought state. Something is amiss. You can plainly see a violation: something which is but ought not to be, or something which is not and yet should be.

Patterns can teach us tactics to use when resolving the gap. The framing is usually the problem statement, the context, and the forces. The body of the pattern guides the interpretation and suggests tactics to close the gap. The GoF patterns[GoF94] missed these steps. They did not structure patterns around what we would perceive before the solution was in place.

Discernment

Your ability to make decisions relies on information, but acquiring that information isn’t always a simple case of looking for it. Sometimes, you lack the skill or discernment to turn your observations into information with the required accuracy. Other times, you might need to learn how to recognise the information. And then there are times when you don’t even know you need the information.

Our ability to see things comes from our ability to discern—to tell things apart. The difference between one thing and another is purely in the eye of the beholder. We build tools such as microscopes and telescopes to give ourselves a way to extend our ability to see. Nonetheless, it still comes down to us using these instruments to augment our senses to detect meaningful differences.

The problem of discernment is a problem of training. We must attain a threshold level of ability to detect differences between materials, actions, and environments. Without it, the best decisions are beyond our reach. Good decisions are based on good information, and we define good information as allowing us to progress towards a good decision. It sounds circular, but it’s not. It hinges on good decisions—those which, given the constraints, introduce less waste or more benefit than any other alternative.

Wisdom, and by extension a pattern, allows you to make good decisions where others may not even see options. If we only think of wood as a singular material, we won't be able to decide what type of wood to use when building a structure. Someone with deep knowledge of the qualities of different hardwoods and softwoods can make an educated decision about the best size and selection of materials for a particular task. Someone with a good grounding in concurrency patterns can better decide how to coordinate distributed services.

Patterns might not always give you an answer or solution. In fact, wisdom can let you know when what you want is impossible, such as when someone asks you for a consistent, always-available service that handles network splits.

When we only see the surface level, we are blind to systemic problems. Being unable to derive an intent or detect the system behind a set of attributes often means it does not exist to us. And you cannot react to that which you cannot sense. Think about how you would not feel safe putting your hand into a drawer full of knives if your hands were numb with cold. When you have problem-centric patterns, even if they don’t provide solutions, they might still save your fingers.

The potential for patterns to replace discernment training is very high. Many organisations recruit for or train their members in skills to fill gaps in the ability to act but not in their capacity to sense. It is important to be able to make changes, but it’s much less valuable if you don’t know which change is best to make. Or whether a change needs to be made at all.

I have experienced three ways to gain discernment skills. You may have experienced others. There’s learning from experiencing it yourself. This is the slowest but the most memorable form of training, as the problems are personal. Then there’s official training, which doesn’t tend to stick, but at least it often comes with checklists to augment your senses. This applies to safety and security training, among other areas, as there is frequently so much to keep in mind. The last is via anecdote. I put patterns in this category. It’s someone else’s experience delivered in a consumable form. This is probably the best way to transfer discernment skills. You don’t have to learn from your mistakes, but the drama of following the story makes it stick much better.

Information

We can only make decisions based on information. Too much information hides valuable information in the noise. This is why we say timely, relevant information is valuable. We only want what’s necessary. But what is necessary?

  • Anything that answers a question we actively ask.
  • Anything that answers a question we didn’t know we needed to ask.
  • Anything that reveals a new important question.

We can only make decisions based on processed information, which implies that some actions you take will make future decisions available. This indicates there is an order in which decisions must be made: a good sequence.

DevOps seems to have emerged from an extended period when management looked at the wrong information to make their decisions. They looked at costs, not profits, because costs were easier to measure and estimate, and they had narrowed their attention to making sure everyone was working efficiently. It took Eliyahu Goldratt to shake them up and make them realise they should have heeded other metrics.

Eliyahu Goldratt’s book, The Goal[Goal84], demands that everyone scrutinise the flow of work, not the busyness of the machines. DevOps thinking is meant to capture this thought process, but we still find managers making sure everyone has work to do, not making sure important work is making progress.

Patterns have the potential to help by reminding us of what we want to achieve, which metrics to look at, and how to measure them. The wholesome design of a neighbourly street is one where it is inviting for children to play and sitting outside is pleasant. Metrics for this would be things like how fast cars are likely to travel down the road and how much shade you have in summertime. Planting trees gives shade, and making the streets narrow and corners tight slows the traffic down. The problem is defined, the metrics are known, and the tactics for improving them are offered—the solution is up to you.

Revelation

You cannot make decisions without some level of discernment. Decisions are based on information, and information presupposes a capacity to interpret data (the observable) into information (phenomena).

I prefer when the details of a decision are explicit and the consequences identifiable. These features allow me to be confident about my decisions and judgement calls.

When I think about design, I think about simulation and experimentation in the malleable fabric of the mind. In this sense, we can make decisions without having to do all the groundwork to get to the point where we have all the details. You mentally simulate the groundwork and have an idea of the actual situation before you get there. This gives you the context you need to make a reasonable decision. But a simulation is not the real world, and you are a human who can readily hold two contradictory beliefs simultaneously. Only the real world, with its consequences for contradictions, can give you the feedback you need. Only the real world can reveal the problems with your design. This is the reason why plans fail.

Design is a verb and a noun. The verb creates the noun, just as the verb 'structure' creates the noun 'structure'. Both are acts of connecting things to complete a balanced final form. Both are about resolving the forces in systems across multiple domains. But a lack of revelation limits design on paper and in the mind.

Christopher Alexander’s methods rejected theoretical design work—design purely on paper or simulated in the mind1. Instead, his paper processes only existed to support organising his active methods—those methods that revealed the limits early in the process and found beauty in working with them. His buildings were not made for postcards or presenting to committees to get them to sign off on a budget. They were made more than they were designed, and made to work well in their environment for the people who were already part of it. In effect, he avoided compromising his vision by finding the right decisions to make early.

So, how do we avoid compromising our vision? How do we bring the right decisions to the front and make those good decisions early? Astute readers will have already made the leap to the iterative approach of agile development, where working code is more important than following a plan. Working code plays the same role as a physical example: it helps discover unexpected constraints and limits. Agile prefers smaller working pieces over making a beeline for the final destination, which is precisely the difference between Christopher Alexander's work and the general process of modern architecture. This is why some saw Agile as the silver bullet.

We want these revelations in our design. We can use agile approaches, but is there an even faster shortcut?

Richard P. Gabriel wrote the foreword to Pattern-Oriented Software Architecture Volume 5[POSA5-07]. In it, he reiterates, 'Design is the thinking one does before building'. The foreword leads down a path of thought touching on many aspects of design as an activity and an artefact, leading us to conclude that patterns help us avoid getting stuck in a rut with our biases and expectations. It's a wonderful foreword I have read multiple times. Towards the end, it talks about AI-generated solutions as a response to problems that are intractable from both the outset and the outcome.

This does not refer to the contemporary understanding of AI but to the genetic algorithm (GA) approach to discovering new designs: putting a generator and a fitness function into a search space to emulate random mutation and recombination, along with a selection process.
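
To make the shape of that approach concrete, here is a minimal sketch of a genetic algorithm applied to an invented toy problem (maximise the number of 1-bits in a fixed-length genome). The population size, genome length, and fitness function are arbitrary choices for illustration; only the loop of selection, recombination, and mutation is the point.

    #include <algorithm>
    #include <cstddef>
    #include <iostream>
    #include <random>
    #include <vector>

    using Genome = std::vector<int>;   // a candidate design, encoded as bits

    // Externally supplied sense of 'good': count the 1-bits (a toy objective).
    int fitness(const Genome& g) {
        return static_cast<int>(std::count(g.begin(), g.end(), 1));
    }

    int main() {
        std::mt19937 rng(42);
        std::uniform_int_distribution<int> bit(0, 1);
        std::uniform_int_distribution<std::size_t> locus(0, 31);

        // Generator: an arbitrary, intention-free initial population.
        std::vector<Genome> population(20, Genome(32));
        for (auto& g : population)
            for (auto& b : g) b = bit(rng);

        for (int generation = 0; generation < 100; ++generation) {
            // Selection: keep the fitter half.
            std::sort(population.begin(), population.end(),
                      [](const Genome& a, const Genome& b) { return fitness(a) > fitness(b); });
            population.resize(10);

            // Recombination and mutation: refill the population from the survivors.
            std::uniform_int_distribution<std::size_t> survivor(0, population.size() - 1);
            while (population.size() < 20) {
                Genome child = population[survivor(rng)];
                const Genome& mate = population[survivor(rng)];
                std::size_t cut = locus(rng);
                std::copy(mate.begin() + cut, mate.end(), child.begin() + cut);  // crossover
                child[locus(rng)] ^= 1;                                          // point mutation
                population.push_back(child);
            }
        }

        // The best survivor drifts towards the all-ones genome without the loop
        // ever knowing what the goal means.
        std::cout << "best fitness: " << fitness(population.front()) << " of 32\n";
        return 0;
    }

Nothing in the loop knows what a 'good' candidate looks like; that knowledge lives entirely in the externally supplied fitness function.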

The core engine of a GA AI doesn't have our biases; training data brings those along later. It does not have our genes and therefore does not have our sense of quality. In the real world, evolution also has no bias; it selects only as appropriate. Selection is not random, and the candidates are, within bounds, arbitrary and free of intention. In effect, the process is goalless and naïve.

Design is the work of creation carried out in the conceptual domain; to design is to iterate mental constructions. Running a dream truck across a dream bridge is testing, if somewhat abstractly. When we design within our biases and expectations, it's fast and safe. When we have a novel thought taking us away from our expected design, we call it a revelation. The AI way of designing is all revelation because it doesn't have a normal approach, but it does have an externally provided means to recognise what is good: us, watching. Because it is so fast at designing, we add metrics as a procedural stand-in for our awareness.

It’s all revelation because a GA AI has no inhibitions. It will drive a truck over the bridge, sure, but it will also attempt to drive the truck backwards, under, through the water, and anything else we would have immediately dismissed. We see such assumption breaks in many examples of GA where people build fitness functions without laying down principles. One of my favourite examples2 of this is a GA that was tasked with creating a walking human, but instead of making progress by walking, it rotated the feet beyond their natural limits, effectively turning them into wheels.

Design patterns cannot replace the childlike naïvety of an AI, but they give us something else. They provide someone else’s experience with their biases, not ours. Someone else’s rut can be just as revelatory. Patterns can offer a way to find other points of view for inspection. They could be a way to break us out of our rut because they reveal the thought process of another mind on a similar journey.

In summary, a design pattern has the potential to reveal the set of consequences and new decisions that must be made. It’s valuable because we avoid the extended physical and mental costs associated with gaining these insights first-hand. A well-written pattern identifies your next position for deciding the subsequent action. A pattern can tell you all the decisions you must make and the choices that are already determined for you.

1

His program at the University of California, Berkeley preferred to judge students by real constructions over paper designs. He declined[NoO2-02] to submit his students' work to juries (architects who would judge a student's work) because they were judging the paper design, not the realised building, which to him 'did not make sense'.

2

I have been unable to source the original work. However, surprising results are common when using GAs, so a quick search should return a feast of exploited loopholes.

Acting with experience-free wisdom

You can consider wisdom to be pre-made decisions. As opposed to knowledge, wisdom isn’t about facts and techniques; it’s about predicting the future through hindsight. When you’re in new territory with any endeavour, you must work from first principles, figuring things out one step at a time. In this context, wisdom is when you have already seen where these steps lead and decide against taking them.

Wisdom includes insight—the property of extrapolating from a situation to an eventual outcome. Using design patterns to obtain insight is hard without intensely studying the systems emerging from their application. And sadly, due to their solution form, they lose their potential as insight generators.

If most of the GoF book[GoF94] had been presented as techniques for nominalising decisions, it could have inspired a generation of developers to find new strategies around the core concept. The principle of nominalisation is using a named object, rather than procedural code, to differentiate behaviour. For example, there's a typical pattern in languages without lazy garbage collectors of having object lifetimes drive the housekeeping of another resource. In C++, it's called RAII1. This pattern is helpful. It's a strongly recurring pattern in C++, but you can also see it used in Visual Basic, D, and now in Rust. It's the nominalisation of 'at the end of scope' or 'just before returning'. It's also a great pattern that isn't called a pattern, because it's not in the GoF book.
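
As a minimal sketch of that nominalisation, here is a hypothetical C++ wrapper around a C file handle. The class name and the file it opens are invented for illustration; the point is that the destructor, not the call site, owns the 'at the end of scope' housekeeping.

    #include <cstdio>
    #include <stdexcept>

    // Hypothetical wrapper: the object's lifetime names 'close the file at the
    // end of scope', so no call site has to remember to do the housekeeping.
    class FileHandle {
    public:
        explicit FileHandle(const char* path) : file_(std::fopen(path, "r")) {
            if (!file_) throw std::runtime_error("could not open file");
        }
        ~FileHandle() { std::fclose(file_); }          // runs on every exit path

        FileHandle(const FileHandle&) = delete;        // exactly one owner
        FileHandle& operator=(const FileHandle&) = delete;

        std::FILE* get() const { return file_; }

    private:
        std::FILE* file_;
    };

    void readConfig() {
        FileHandle config("settings.ini");   // acquired here
        // ... use config.get() ...
    }                                        // released here, even if an exception is thrown

    int main() {
        try {
            readConfig();
        } catch (const std::exception& e) {
            std::printf("%s\n", e.what());   // the file may not exist; the handle is never leaked
        }
        return 0;
    }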

Awareness of limits and refinement of requirements

We can think of wisdom as awareness of any system's soft and hard limits. The limits can be the theoretical bounds of an algorithm or ordinary hardware constraints. Not being aware of these limits leads a developer to spend time on an abstract design that is theoretically sound but not practical. Limits found and imposed early can short-circuit doomed designs, saving much developer time. Some limitation patterns are mathematical, so they can answer questions even with far-future hardware advances. Others are guidelines on which physical or numerical situations can lead you into trouble.

When you discover limits, you can tell whether satisfying your requirements is possible. Limits always exist in things and affect what is achievable, so not knowing them can hinder your decision-making prospects. If you have a known throughput on a channel of 100MB per second, you cannot transfer 1GB of data in under a second. What can you do instead? Once you know your limits and requirements, you can start making decisions. If you know you can cut half the input data without issue and further compress it by 80%, you know what to do. If you can’t, you know you need to look for alternatives. This is where the refinement of requirements aspect of patterns hits hard.
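
To make the arithmetic concrete, here is a minimal sketch of that back-of-the-envelope check, using the illustrative numbers from above rather than any measured figures.

    #include <cstdio>

    int main() {
        const double channel_mb_per_s = 100.0;   // known channel throughput
        const double payload_mb       = 1024.0;  // roughly 1 GB of raw data

        // Naive transfer: about ten seconds. Too slow for the stated requirement.
        std::printf("raw transfer: %.1f s\n", payload_mb / channel_mb_per_s);

        // Refined requirement: drop half the data, compress the rest by 80%.
        const double refined_mb = payload_mb * 0.5 * (1.0 - 0.8);
        std::printf("refined transfer: %.1f s\n", refined_mb / channel_mb_per_s);
        return 0;
    }

Ten seconds for the raw transfer, roughly one second once the requirement has been refined; the limit has not moved, but the requirement has.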

A pattern can provide clues to whether your requirements are too broad. Maybe you can get by with only some of your data up front and stream the rest later. Many content providers see this recurring problem pattern. Online video and audio services usually provide incomplete media. They supply just enough to form a buffer to protect against skipping. They always assume you will consume the content slower than they can deliver it.

The same pattern applies when an application needs to present content triggered by external inputs. The problem appears when the data required to show a complete document is larger than the working memory of the program or machine running it, but the data required to show any specific piece of the document is small enough to load on its own. You can hold a portion, but not everything.

The requirement started as ‘need the data loaded’, but the pattern asked you to reconsider your requirement as ‘need the data you must have for the next N seconds of the runtime’.

Just in time

For example, consider a system where hundreds of possible voice snippets will play in response to user input. The obvious, and ultimately unworkable, solution, but one commonly employed in the past, was to load all the samples into memory and play them on demand. This is why so many games could not have lots of dialogue, beyond the cost of recording and storing it all. Current wisdom dictates that loading them all into memory is a waste: it takes up too much memory, and only a tiny subset of the loaded data would ever see use.

The pattern for this problem talks to the other limits you need to consider. Any data that is both on-demand and played back as a time series of samples can be buffered. The remaining audio of any dialogue can be loaded from your media when the initial playback commences. Only a portion must be immediately accessible, waiting to play in response to input. How large that portion needs to be depends on the seek time and bandwidth of the media, plus any latency caused by other systems interacting with the IO driver. Then there’s context. How many of these initial samples must be present? Maybe not that many. Some are more likely to be required than others. That probability will change over time, allowing you to swap out what is buffered as the context changes.
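
A minimal sketch of the shape this takes, assuming a hypothetical mixer and streaming layer (the names below are invented): only the head of each snippet stays resident, and streaming the remainder starts the moment playback begins.

    #include <cstddef>
    #include <cstdio>
    #include <string>
    #include <vector>

    // Hypothetical snippet: only the head of the audio stays resident, sized to
    // cover media seek time, bandwidth, and any IO latency.
    struct VoiceSnippet {
        std::string path;                  // where the full recording lives
        std::vector<short> headSamples;    // preloaded head, ready to play instantly
    };

    class VoicePlayer {
    public:
        void play(const VoiceSnippet& snippet) {
            submitToMixer(snippet.headSamples);                        // immediate response
            beginStreaming(snippet.path, snippet.headSamples.size());  // the rest arrives in time
        }

    private:
        // Stand-ins for an audio engine; a real mixer and streaming IO layer are assumed.
        void submitToMixer(const std::vector<short>& samples) {
            std::printf("playing %zu buffered samples\n", samples.size());
        }
        void beginStreaming(const std::string& path, std::size_t offsetSamples) {
            std::printf("streaming %s from sample %zu\n", path.c_str(), offsetSamples);
        }
    };

    int main() {
        VoicePlayer player;
        player.play({"dialogue/greeting.wav", std::vector<short>(4096)});
        return 0;
    }

How large the resident head needs to be is exactly the calculation the pattern describes: seek time plus latency, multiplied by the playback rate, with some margin.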

Replace audio samples with customer account transaction records, and we have a similar situation. Holding the previous month’s transactions nearby (on numerous servers) would be better than all or none of them. Holding them all would require expensive servers, but loading records from backing storage when a client connects leads to high latency. It’s a repeating problem with a recurring set of invariants for all good solutions.

This example addressed knowing the hardware’s limits and how to configure your software in response. But it was also about understanding the limits of the context, the soft limits imposed by the requirements. All these pieces of information are easy to find when you know to look for them. That’s the power of wisdom. It is knowing what information to look for, what to filter, and how to interpret it into assumptions you can act on.

When looking for a red sock, you will not find a blue one.

There is humanity in any design. Patterns invite all forces, not just code but also coders and users. Patterns allow us to avoid or recognise biases that lead to inferior designs.

Our perspective is modal. When we know what we are looking for or expecting to see, we will see it, and only it, more often. We see this effect on a larger, longer scale with cognitive bias. We have a subjective worldview we feed with information, and when we process the information, the strength of the bias affects how we interpret all these new events. If we are particularly open to adopting new evidence, we might adjust our subjective model of the world. However, if the change to our worldview is expensive, we are more likely to claim we misunderstood the event, it didn’t happen, or it was contrived against us. What does expensive mean? Expensive ties into why we cannot find a blue sock when looking for a red one.

When we are not under stress but casually making decisions, the mind is a complex structure of interconnected beliefs that may or may not contradict each other when brought together. We think with only some of our knowledge at a time. In the casual flow of life, we do not change our minds. We might pick up new information, but it won't affect our core beliefs. To avoid changing our core beliefs, we perceive through a set of non-contradictory mental frames. Events or realisations that bring conflicting frames together cause us to grow or learn. However, we can also consciously or unconsciously decide to assume the external event was an error. In part, this is why we say a teacher can teach all they like, but there will only be learning if the student is willing to learn.

This relates to our sock search. We can be in the mental frame of looking for a sock with a particular pattern and expect it to be red. When we see a sock with the correct pattern, but it’s blue, we filter it so that we don’t even see the pattern or the sock. It was the wrong sock, so we skim over it.

The value of ignorance is knowing you don’t know. We often know things that aren’t true, and that causes us a lot of trouble. It’s a shame you can’t teach ignorance.

Patterns can help us remember what we’re looking for by helping us look for a patterned sock, not a red one. When a Strategy pattern asks us to decouple something, it reminds us to separate the decision to act from the decision about which action to take. Data-oriented design reminds us to consider whether an object truly has an identity. When we view our structures through the Interpreter pattern, it asks us to consider whether the information is in the objects or in the way they are linked together.
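
As a minimal sketch of that Strategy separation, consider the hypothetical archiver below: the archiver decides when to act, while the injected strategy decides which action is taken. The names and the compression stand-ins are invented for illustration.

    #include <functional>
    #include <iostream>
    #include <string>
    #include <utility>

    // Which action to take: interchangeable compression strategies (stand-ins).
    using Compressor = std::function<std::string(const std::string&)>;

    std::string fastCompress(const std::string& data)  { return "fast(" + data + ")"; }
    std::string denseCompress(const std::string& data) { return "dense(" + data + ")"; }

    // When to act: the archiver decides that; the injected strategy decides how.
    class Archiver {
    public:
        explicit Archiver(Compressor c) : compress_(std::move(c)) {}
        void store(const std::string& data) {
            std::cout << "storing " << compress_(data) << '\n';
        }
    private:
        Compressor compress_;
    };

    int main() {
        Archiver quick(fastCompress);     // behaviour swapped without touching Archiver
        Archiver small(denseCompress);
        quick.store("logs");
        small.store("logs");
        return 0;
    }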

1

Resource Acquisition Is Initialization. The Wikipedia page calls it an idiom, but by now, you should be able to tell it is a pattern because we can define the context and the forces that it resolves and how it recurs. https://en.wikipedia.org/wiki/Resource_acquisition_is_initialization

Stability through structure-preserving change

Refactoring to Patterns[RtP04], by Joshua Kerievsky, is a book about repairing code and increasing its quality. It marries the two ideas of refactoring and patterns: refactoring alone is a technique for making change safe, while patterns should restrict those changes to the right ones. Before this book, software patterns were primarily1 presented as static forms.

However, this idea that a pattern is a static form did not align with Christopher Alexander’s works. It’s not how he described patterns in The Timeless Way of Building[TTWoB79]. Patterns were never just an end goal; they were always a process as well. They are a verb and a noun, a self-forming system reinforcing its state through forces and reactions.

Refactoring to Patterns is a book about design patterns in a much truer sense. It is a book on repairing. It is indispensable in this era of building things fast and worrying about maintenance later. The move-fast-and-break-things era might be somewhat over, but we live with the legacy of a decade of products formed by this mindset. People made things that were insecure, unscalable, invasive, corrupting, and, at the same time, ubiquitous, foundational, and critical to our progress.

Refactoring to Patterns admits we don’t get it right the first time. The book recognises the state of systems and repairs them. We know we write code before we’re sure about it, as that’s how you find out it was wrong. We build it up and then refactor it when we recognise the latent patterns.

Before I came across this book, I was stumped trying to find a way to marry the 15 properties to software development. I had only recognised a few properties, such as repetition with variation and hierarchy of refinement, plus some from my preferred paradigm, such as 'decisions made early' and 'request actions, express interest in state'. The value of both refactoring books lay in the inspiration that I could find these properties through their constructive steps. When Alexander discussed the 15 properties, he was never far from describing how each property arrived. Each property came with a process to create it from what was around it, or it was a byproduct of a wholesome process. If the same properties exist in software, they may undergo the same type of transformation, which means they will likely turn up during refactoring or in regularly repeated feature development processes.

Refactoring, with patterns in mind, is an unfolding sequence. Because refactoring is not meant to change behaviour, it is structure (or value) preserving. What's fantastic is that this gives us two positive outcomes. Pattern refactoring allows you to migrate an old codebase towards a better state. It also suggests that thinking about patterns can be put off: refactoring always applies to existing code, so you don't need to figure out which patterns to introduce up front. Don't take this as a suggestion to avoid any up-front planning, but you can stop beating yourself up over not realising you needed a pattern until halfway through a project.

Scale

As a project grows in size, neighbours become distant acquaintances. The original project may have had one developer, then a team, then teams. Refactoring good quality systems into larger ones is always much more successful than creating a large system from scratch. Refactoring appears to be the only way to unfold a system of code. Rewrites don’t carry any of the lessons learned in the code; those exist only in the fallible minds of the team developing the replacement (this is assuming the team is the same as the one that created it; a fresh team is hopelessly cursed to reproduce eerily similar bugs the second time around).

Why is this the only good way to develop a large system? It’s a simple case of acting on incomplete data. In Domain Driven Design[DDD04], the concept of a Ubiquitous language drives a way of thinking about the code centred around the real problem in the language of the problem domain. During development, programmers and architects regularly have revelations regarding the meaning of words used by the experts. These revelations create new meaningful connections between aspects of the code. These connections demand you refactor the code. Revelations often lead to object splits or joins, refactoring objects into more entities or combining methods or objects that aren’t as different as presumed. Sometimes, we reinterpret an existing identity as a transient form of another object or bring another entity into existence that has so far remained unmentioned.

All these changes are only required after many rounds of returning to the experts to get their evaluation of the system once it has been designed or even implemented. A non-expert can only deduce some of the interacting objects and activities they are involved in before writing the first lines of code. We cannot expect the experts to know what they weren’t sharing due to the curse of knowledge2.

The information about how the system should work was always there, just as the laws of physics, the materials, and the landscape upon which a building is built are all there from the first moment of the contract. And just as Christopher Alexander found, even with all that information, the solution is intractable for the human mind. It needs augmenting with processes and tools—a piece-wise process of experimentation, evaluation, repair, and remodelling. In code, this is an iterative and incremental refactoring process.

Architecture needed a way to introduce new features incrementally and evolve the design and implementation. What Christopher Alexander developed was the process of unfolding.

1

Even though the GoF book does mention the patterns as targets for refactoring, it’s near the end of the book (p. 353) and not introduced as a step for each pattern, so it’s easily overlooked.

2

https://en.wikipedia.org/wiki/Curse_of_knowledge: the failure to consider how much you take for granted. For example, older programmers, upon observing a limit of 255, will assume something about the implementation, while non-technical people would have no idea what that limit implies.

The unfolding process

In The Nature of Order, Book 1[NoO1-01], Christopher Alexander related the quality without a name to his fundamental properties, piecemeal growth, and generative construction. Before he reached this point, he had been looking for the property everywhere, finding inspiration in the strangest places. One moment of note is when he began to see the indescribable quality in antique Turkish and Persian rugs. Once he started recognising it there, the fundamental properties became evident to him. Around the same time, he was experimenting with patterns and structures to find out if there was a repeatable activity that could prove something deeper was going on. His experiments showed, without room for credible doubt, that even though people had different tastes and opinions, there was a test they could perform where they would overwhelmingly agree1 on what things were better than others.

However, the surprising, and the important, thing is that the mirror-of-the-self test does not correspond to our everyday sense of what we like. When we really concentrate on the life in things by checking how much self they have, we find that sometimes, yes, the test does confirm our liking, or our preference. But at other times, it gives us quite different results, which are not stereotypes of good design but which surprise us, shock us out of our complacency, and make us recognize that we are confronted here with an autonomous phenomenon, that has a great deal to teach us.

— Christopher Alexander, The Nature of Order Volume 1, p. 334

Those following the works of Christopher Alexander might notice he referred[NoO1-01] to the writer and founder of the metaphysics of quality, Robert M. Pirsig. What Alexander said about quality may resemble the arguments in Pirsig's first work[ZatAoMM74], where the reader is invited to think of quality as ineffable, neither subjective nor objective. To me, it was clear Pirsig thought quality was beyond our current means of understanding. Alexander's writings, however, claimed we could understand it[TTWoB79].

Pirsig claimed quality was a universal feature[ZatAoMM74], that we could determine whether something was good or bad. However, he also claimed we mightn’t know why. His thesis relied on humans to interpret the world and make judgements to find the quality of something. Pirsig seemed unable to escape using human judgement to perceive value.

Pirsig’s later work[Lila91] pointed to freeing the human mind of its limits to see more freely and consider things without inhibition. But he remained stuck on the requirement of human perception right up until the end.

Alexander, on the other hand, kept pursuing the properties of quality. His research continued towards a technical solution to the definition of quality in all things.

As Christopher Alexander’s work progressed, he moved away from patterns and towards this more fundamental building block of architectural form[Grabow83]. He sought to understand where the patterns came from and discovered their essence in architecture as the 15 fundamental properties. He could describe existing patterns in terms of the fundamental properties and referenced them in The Nature of Order, Book 1[NoO1-01] at the end of every property section. I believe they can predict new patterns, firm up the weaker ones by making what was missing visible, and can be used to find fault and dismiss others outright because they are incomplete or idiomatic.

The fundamental properties of forms are viable in many human crafting and construction domains. However, they’re not directly applicable to code as there is little geometry in code itself. Instead of local symmetries, we have variability and commonality. Instead of deep interlock and ambiguity we have polymorphism and procedure. The fundamental properties of code are in a different reality, where there may be fewer dimensions but many more domains.

The fundamental properties of forms apply to application development, where it touches the user’s reality. Many of them are helpful in user experience and interface design. Some help with problem domain analysis, as the problem is usually real-world. But for the code itself, they do not fit, akin to how they work well for animals and plants but not for DNA.

When a domain has different rules for what makes a thing beautiful, the fundamental properties change, and the rules of unfolding change in line with the constraints and options of that domain. We have to find the beauty by some other means and provide convincing arguments for the value of the properties we find, as we will not have overwhelming agreement by default when we find the best ones.

When it comes to code, unfolding always seems to be one of the following:

  • Recognise an option the code lacks and expand the code to provide it.
  • Recognise some commonality in the code and collapse it.

We have found at least one property, that of differentiation or combination. Fixing code like this links to levels of scale or strong centres. But most properties are not present. Unlike the real world, we don’t have a property of Alternating Repetition, or repetition in general, as duplication is something that the code does itself. Duplicated code has no value, and repetition often leads to unintended variation.

So, what are the 15 properties of software? They aren't the same as those of building and architecture, as there is no geometry to the habitability of source code. Perhaps Melvin Conway found them in his relatively obscure paper Towards Simplifying Application Development, in a Dozen Lessons[TSADDL]. But rather than list them, he stumbled across them all in one step with the transform-in-place application development technique. All the properties of unfolding code may come from this: introducing a new element and linking it to another changes the application without ever breaking its completeness. Although the process feels like unfolding, I can't see a world where this is how we develop applications in general. I don't have the imagination to envisage developing something as complicated as a word processor with his approach.

We may have two super-properties: mutation and extension. The first super-property, mutation, comprises a few transformations hidden within refactoring actions, things we do to code as a matter of course and believe are safe to apply (a small sketch of the first follows the list):

  • Extracting functions.
  • Making code generic when it’s called for.
  • Splitting classes into different parts with different responsibilities.
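
A minimal sketch of the first transformation, extracting a function, using an invented report-printing routine: the behaviour is identical before and after, but the formatting now has a name of its own that later changes can gather around.

    #include <iostream>
    #include <string>
    #include <utility>
    #include <vector>

    // Before extraction, printReport built each line inline. Pulling the
    // formatting out gives it a name; the observable behaviour is unchanged.
    std::string formatLine(const std::string& item, int count) {
        return item + ": " + std::to_string(count);
    }

    void printReport(const std::vector<std::pair<std::string, int>>& rows) {
        for (const auto& row : rows) {
            std::cout << formatLine(row.first, row.second) << '\n';  // extracted call
        }
    }

    int main() {
        printReport({{"apples", 3}, {"pears", 5}});
        return 0;
    }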

These are all ways code unfolds while its capabilities remain unchanged. But then we have the super-property of extension—of added features. The options for features are particularly numerous. To name a few abstract variants:

  • New parameters for existing methods.
  • New actions we can perform on our data.
  • New data on which to perform the actions.
  • New ways to receive or interpret data.
  • New events to respond to or generate in response.

There are many patterns based on subsets of these features. Some patterns are primarily feature work, while others are refactoring. However, all software design patterns appear to decompose into one of these two super-properties. I hope I am wrong and someone does find a finite set of forms, but they will only find them if they start by looking at the patterns that use them.

The unfolded design

The actions of an agile process are iterative, incremental, and taken in small steps. Agile development tends towards an unfolding process. The teams are often feature-oriented for this reason, not discipline-aligned. When organising teams around a feature, visible progress becomes the only necessary metric. But if your features cross team boundaries, the teams will have to coordinate at a higher level to achieve any goals, increasing communication overhead.

Each unfolding step, each piece of progress, has to be completed as a single step; otherwise, the process isn't clean. When tasks are 'thrown over the wall', the system is not in a wholesome state at every step, so it is no longer unfolding. It's much more productive to have many smaller intermediate steps with no visible improvement, where each step leaves the system in a wholesome state. Think of it like getting from one platform at a station to another. You could try to cross the tracks, but that's not safe. Instead, you go up the stairs to the bridge (no visible progress), cross the bridge (minimal, irrelevant progress), and finally walk down to the other platform (finally, some visible progress).

Figure: Running across the tracks is fast but not safe.

The physical building patterns process had everyone grouping around problems and resolving them. The trick was to tackle the problems that made future problems easier to solve. When they resolved the overarching problems of space and utilisation, most other problems had simple solutions. In software, we need to find out which patterns are our patterns of space and utilisation. Procedural design suggests the most important patterns are data structure and procedure design. Object-oriented design suggests the best patterns concern representing the problem in the domain object model. The Domain Driven Design approach suggests the most important patterns lead to understanding the different interpretations of data and available actions on those interpretations.

1

[NoO1-01] from page 325 onward is an explanation of the concept of the mirror-of-the-self, which can be used as a test to determine whether something is more whole.

Patterns as complexity inhibitors

D. L. Parnas wrote On the criteria to be used in decomposing systems into modules[Decomp72], a paper that sought to help us reflect on how to effectively cut a program into modules. The paper compares two approaches to breaking down a piece of software: one approach based on steps, the other a method that cuts along the lines of information hiding. You could read these as decomposition by action or by object. I interpret this paper as stating that object-oriented design is better because changes are usually limited to one place.

Whatever your language or programming paradigm, change is inevitable, so the cost of change always decreases when you cut up your program along those change boundaries. The software design patterns of the GoF book helped here by suggesting a collection of ways to decouple code using abstraction and inheritance. But we seldom ask what should call us to action. When should we decouple things? Patterns help us answer this when they include the criteria for when to apply them. Or, put another way, if they are problem-centric, they tell us when to act.

I am referring again to the foreword from Pattern-Oriented Software Architecture Volume 5[POSA5-07] and the general tractability of problems. The problems that are barely tractable from the outset are those with more variables than the designer can think about at once. These problems are problems of complexity, of coupled variables. The principles behind Notes on the Synthesis of Form[Notes64] (Notes) were all derived from attempting to solve intractable large-scale designs. They were about solving complexity. D. L. Parnas's concept of splitting along boundaries is another strategy for reducing complexity, applied to the life cycle of development and maintenance. Both are about reducing the impact of change. Notes on the Synthesis of Form is a guide on structuring your process to avoid excessive complexity during the design phase. Modules reduce complexity by clumping related changes during maintenance. The fact that patterns emerged from Notes on the Synthesis of Form must mean something about their potential for reducing or inhibiting complexity.

The cost of complexity

Complexity increases the cost to grow or change. What Christopher Alexander recognised, and wrote in Notes on the Synthesis of Form, was the converse: the cost to grow or change is an indicator of complexity. The cost of any change relates to the number of side effects, or how many things need to change in response to the adjustment. Any obscurity about the repercussions of a change increases the cost further. With this in mind, his work directly attacked the problem of how to build a large-scale project with many interconnected pieces without the complexity dooming the project.

We can see the same situation with large programs. Any sufficiently large program has many connected pieces where changes in one area lead to changes in many others. Ideally, we would enable changes across all project areas, with every change isolated to a single location, and not need to make multiple adjustments throughout the project.

Christopher Alexander’s approach was a universal principle of deciding where to draw the lines by finding the smallest number of connections under change[Notes64]. Rather than map out a project by preconceived concepts and familiar hierarchies, he tried to stay objective about each piece of the final product. He only drew connecting lines between elements that changed in tandem.

For example, if I try to build a code project along the same lines, I can start with a file format I use when storing configuration. If the loader only loads to a simple accessible data structure and can serialise that simple structure back out again, we’ve connected the change requirements of serialising and de-serialising with the File Loader class. Changes to the configuration file format affect both reading and writing. The loader doesn’t provide an API to access the configuration data structure from within the application. We delegate that job to a separate set of methods or objects: the Config API.

Figure: Simple Config API Split

Changing the configuration file format affects the serialisation layer but has little to no effect on the things that wish to be configured. Changing the API for using configuration data loaded from the configuration file would seldom affect the file format. It only affects code using the library to fetch configuration values. Therefore, the configuration file format is separable from the configuration data API. The simple data structure object in the middle is a complexity inhibitor.
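
A minimal sketch of that split, with hypothetical class names: the loader and the usage API meet only at a deliberately plain intermediate structure, so each side can change without disturbing the other.

    #include <map>
    #include <string>

    // The deliberately plain structure in the middle: meaningless on its own,
    // but the only thing the two sides share.
    using ConfigData = std::map<std::string, std::string>;

    // Knows about the file format and nothing about how callers use the values.
    class ConfigFileLoader {
    public:
        ConfigData load(const std::string& path) {
            ConfigData data;
            // ... parse the file at 'path' into key/value pairs (format-specific) ...
            (void)path;
            return data;
        }
        void save(const std::string& path, const ConfigData& data) {
            // ... serialise 'data' back out in the same format ...
            (void)path;
            (void)data;
        }
    };

    // Knows how the application wants to query configuration, nothing about files.
    class ConfigAPI {
    public:
        explicit ConfigAPI(const ConfigData& data) : data_(data) {}
        std::string getString(const std::string& key, const std::string& fallback) const {
            auto it = data_.find(key);
            return it == data_.end() ? fallback : it->second;
        }
    private:
        const ConfigData& data_;  // borrowed; the caller keeps the structure alive
    };

    int main() {
        ConfigFileLoader loader;
        ConfigData data = loader.load("settings.xml");    // format knowledge stays here
        ConfigAPI config(data);                           // query knowledge stays here
        std::string title = config.getString("window.title", "Untitled");
        (void)title;
        return 0;
    }

Swapping the file format touches only the loader; extending the query surface touches only the usage API. The structure in the middle is the complexity inhibitor.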

Now, think about how projects change. New configuration data is the most common. But, as we know from the long history of relational databases, it’s almost always possible to migrate data from one schema to another. Adding new ways to access the data is another routine change, such as loading from a database or through IPC or other direct memory access. In any case, the reader/writer is the only aspect that needs to change. Even migrating data from one type of reader to another is straightforward so long as the loaders load into the same simple intermediate data structure.

The other recurring change is an extension to the query language for configuration.

As you can see, it's quite a departure from how we usually think about modularisation. This process separates the concerns more cleanly than an object-oriented design normally would. The simple data structure in the middle is completely exposed but meaningless data, which seems to go against the principle of encapsulation, but does it? The point of encapsulation is to provide a succinct and simple way to query and manipulate information in the system. I think the usage API does that.

What about data-hiding? If the loader is provided with a place to store the data and that location is handed off to the usage API, via a configuration object, coordinator, or mediating process, then I feel that is covered as well. Neither component owns the simple data structure, but the structure changes much less frequently than either side, so that is where we make the cut.

But what if the simple data structure does change? Sometimes, more advanced queries are required, and you wonder if your XML file shouldn’t have been an SQLite database. But wait, rather than implement queries on your simple data structure, take the stairs, walk across the bridge, descend at the other side, and start using SQL queries.

Writing a query language is dangerous. Adding queries to a simple data structure is fast but not clever.

We might have swapped out the simple data structure for a database, but we only needed to change one thing at a time.

  • Step one allowed us to read and write SQLite database files to and from the simple data structure (see the sketch after this list).
  • In step two, we migrated1 from storing XML to an SQLite file while maintaining the simple data structure.
  • Then, in step three, we migrated from querying the simple data structure to directly querying the SQLite database.
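
As a rough illustration, here is what step one might look like under the same assumptions as the earlier sketch: an SQLite reader/writer that targets the same plain dict, so the migration in step two becomes a single load-then-save pass through the intermediate representation. SqliteConfigLoader and its table layout are hypothetical.

    import sqlite3

    class SqliteConfigLoader:
        """Same contract as ConfigLoader: plain dict in, plain dict out."""

        def load(self, path: str) -> dict:
            with sqlite3.connect(path) as conn:
                rows = conn.execute("SELECT key, value FROM config").fetchall()
            return dict(rows)

        def save(self, path: str, data: dict) -> None:
            with sqlite3.connect(path) as conn:
                conn.execute(
                    "CREATE TABLE IF NOT EXISTS config (key TEXT PRIMARY KEY, value TEXT)"
                )
                conn.executemany(
                    "INSERT OR REPLACE INTO config (key, value) VALUES (?, ?)",
                    data.items(),
                )

    # Step two, the migration itself, is one pass through the simple structure.
    xml_loader = ConfigLoader()        # from the earlier sketch
    db_loader = SqliteConfigLoader()
    db_loader.save("settings.db", xml_loader.load("settings.xml"))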

When we link parts of a design that aren’t strictly necessary, we couple them and make them more complex. Deriving patterns using the process from Notes on the Synthesis of Form helps us reduce unnecessary complexity and cost. Using change frequency as our metric for cutting apart components produces systems that change more easily over time.

1

for this step, we can load the XML into the intermediate representation, then write it out to the database format.

Why Did Patterns Fail?

Some modern architecture is ill-fitting for humans because it puts meaning ahead of purpose, using a building to portray an idea. But some is rotten in the same way digital paintings were terrible in the early days of graphic design on personal computers. Back then, it was almost always just showing off what could be achieved with the medium. Some architecture suffers from the same lack of artistic value because it lacks meaning. It’s devoid of why and only full of a sense of ‘look-at-what-I-can-do’. We see other examples of this in some launch titles for gaming consoles and, in the physical world, in sport utility vehicles with no comprehensible reason to exist other than as status symbols. This type of artefact is the output of entities with a point to prove rather than the desire to fulfil an authentic need.

Christopher Alexander put this type of architecture, the type which he claimed[NoO2-02] was generated from a different process of structure-destroying transformations, into a different class[NoO3-05], and his approach to building attempted to rectify it. Through a series of steps of extraction, he took the architectural process back into the hands of those with a personal, not just financial, stake in the outcome: people who knew deep down whether their requirements were satisfied.

This is how he became a rebel. His actions disturbed the status quo. Few architects of the establishment outwardly supported his ideas. Teachers embedded in the modernist school of thought and locked into its rewards worked to hinder his impact. But at the same time as he was cast out from architecture’s mainstream, he was dragged onto the centre stage of many groups outside of it.

During the early days of the software design pattern movement, it’s possible Christopher Alexander was better known in this particular field than he had ever been as an architect. Thousands of developers referred to him or made things with an eye to the design pattern approach. Beyond software, there was a general interest in patterns to capture solutions that were otherwise difficult to adapt to new terrain.

However, in every domain, establishments would protect themselves, even when the ideas took hold. In every system, change was detected, and feedback mechanisms would push the new ideas out or wrap them in safe material, the way an oyster coats an irritant into a pearl, neutralising them and ruining their potential.

Not all establishments react the same way. They do not all start from the same place. Most systems centre around protecting those in power and the structure that gave them their seat. Even without an official hierarchy, a hidden structure will defend itself. The establishments that Christopher Alexander came up against tended to originate from Western world values, where individualism was protected over familial values or societal norms. As you study the history of his troubles, a pattern emerges of him running up against politics and profiteering. In every project, the local people, those he built for, were never the problem. The problems were most often found in the ego of an external entity.

The image-driven process

In The Nature of Order, Book 1[NoO1-01], Christopher Alexander wrote of his concerns about some architects who appeared to be seeking visibility. That is, we were invited to recognise the architects and not so much their buildings. At times, he seemed exasperated by the ego of other architects, and he brought up how these buildings were built for their appearance on paper and in photos rather than for how they operate[NoO1-01]. They were image-driven.

On this foundation, we can build up a framework for thinking that concentrates on what happens when an image drives the construction. An image-driven process will not rely on metrics of purpose and participation in the final environment. Instead, the presentation is measured for fitness and quality, and the project will be given the green light based on proxy attributes—if the image or the person presenting it is clumsy or untidy, the project will be judged as unworthy.

The image of a constructed artefact lives in a different realm from the final constructed form, just as code lives in a different domain from the application. An image behaves differently, has different values, and introduces complications, waste, or even fatal flaws in the final product. None of this would have come to pass if it had been built through a process based on metrics of the tangible.

For example, an image-driven process expresses its value in solitude. It prefers individual merit rather than collaboration. An image-driven building will stand alone in the material used to appraise its value before construction. By material, I mean the medium of the image, such as paper plans, renderings, and rough sketches. It will be expressed as an exterior first, the main entrance second, and the interior last, which runs counter to giving the most commonly used elements the highest priority.

The flow of life is neglected; instead, only appraisal is respected. The image is about being accepted as a design, not about being acceptable as a product. As another example, an image is not fractal. All features are at the same scale, or at least the scale of a single photo.

As a counterexample, large-scale projects from older times often included buildings commissioned by religious institutions. These buildings are fractal in nature, with many different echoing motifs at different scales. They also seem like they might be image-driven, as they are about making an impression on those beholding them from afar and from within. However, they were not approved for construction based on some picture or plan and then built to spec; instead, they were built according to a typical implementation pattern, with the specifics handled by the developer at the local level. They were not approved based on a literal image but on a need and a common vision.

The image specifies materials and defines the form. There will be extra effort to make the materials work and additional costs approved to make things appear the way they were in the image rather than adapt the image to suit the available materials. The image-driven process’s goal is to be funded, not executed. So long as the project is finished at some point, preferably without a significant loss, it’s a success.

With this static image, extension is unexpected and often denied. Maintenance and repair are used to put things back the way they were, back into the same stressful pose in which they were first manifested, never to improve by change or adaptation. Modification is frowned upon or dangerous due to strict engineering tolerances. The tensions are due to trying to achieve an image goal while avoiding costs.

The image and ego are intertwined. The architect must present an image, and the image must be accepted for the architect to be paid. The architect must complete commissions to become famous enough to sign more commissions. It is a horrible requirement that being well-known means the architect must be visible in the buildings. How challenging might it be to get a renowned architect to design a simple structure without any signature elements?

Image as perspective

Image-driven framing affects the perspective of evaluation. It is how we make snap judgements on attributes we can otherwise only detect after spending considerable time and energy investigating. You can think of the image of a project as a kind of plumage. Creatures rely on being visibly fit to assure potential mates that they carry beneficial genetic traits. We can say the same of projects. An animal with poor plumage will suffer the same passing-over for procreation as the poorly presented project pitch. However, the difference between animals and projects is that genes are less able to lie about their value. It is worse when you consider the lie might not even be intentional.

When we sell a construction project to someone who can fund it, they invest based on what we show them. In addition, they can only fund what can be presented. This distinction explains how some projects get a green light while others fail to find funding, regardless of their relative merits.

We overlook projects that have net positive effects on both the investor and the community they target because the value they bring is hard to present. We often fund ill-conceived projects, which eventually cause more problems than they solve. We sign off with a smile when all the positives are very easy to present. When the negatives are complicated, hidden, or affect elements not generally considered during the initial concept phases, we naturally ignore them.

However, the image-driven system does work for some industries where the pitch more closely resembles the final piece. Film suffers less from this framing when the work is derivative or a well-understood genre. Fine art is another sector that does not seem affected. The work is often completed before pitching, so no translation occurs from image to product. Fiction novels, comic books, and all sorts of things where the scale and interaction are locked in have fewer issues navigating the different worlds of the pitch and the product.

The forms matter much more when interacting at multiple scales and vantage points. Deciding to fund a hospital without reviewing how the hospital works and feels from the perspective of the doctors, nurses, and patients is a recipe for constructing something alien and inefficient. Aerial photography promises very different things from what humans need.

When the product and the image share the same scale, field of view, and viewpoint, what you intend to build is more likely to be what you evaluate when deciding to fund the project. When the final product is a page in a magazine, reviewing it on a monitor is quite close to the final form. It’s still not quite the same thing, which is why many still use paper proofs to spot errors.

Ego-driven construction

Unlike the problems of the image-driven pitch approval process, ego-driven construction can happen on a project whether or not it needs external funding. Ego-driven construction is about leaving a mark or getting recognition for your work.

The need for recognition is strong, but the need to do a good job should outweigh it. On some projects, obvious candidates for a development team can and will be left uninvited because of their insistence on imprinting themselves on the project. Bring your skill, not your ego. True virtue does not need to win. Good code is usually devoid of ego. Code signed by an author is either left alone when we should fix it, leaving tension in the codebase, or treated as violated when modified away from its original state, leaving tension in the workplace.

Quality comes from an intensely personal pride. It does not come from coin or celebrity—the pride we take in our work is not showing off. It’s about living up to our standards. We have examples of builders who built out of necessity where that necessity called for harmonious, impressive structures only attainable by working under a shared vision.

In the past, the most beautiful and profound constructions were those built for religion. Not just places of worship but also objects, monuments, and songs. Ego did not play such a significant part in their building, even when a person of ego commissioned them. These days, an egoless creation is driven by a need to express an idea or truth. Thus, we hunt for meaning in art because wealth has appropriated simple beauty for conspicuous consumption.

Great work is done on a large scale when everyone has the same goal and creates with the same principles. Everyone must understand what is important about what they are making and why they are making it. Workers could tell they were doing the right thing when building structures as part of a religion, and they could confidently call out and correct those who weren’t. Though some might think of this as a religious or spiritual feeling, it’s simpler and more secular than that. It is to be consumed by an ideal.

But you don’t need a grand vision to create without ego. Creating something that pleases you is selfish, but if it really is for your pleasure alone, it will also be egoless. An old pair of slippers or a worn-out bathrobe is a selfish but egoless pleasure. It’s not glamorous, as glamour is for others, even when done for yourself, because it’s based on cultural expectations. Egoless creation is evaluated on its direct ability to please and satisfy needs. However, egoless creation needn’t be selfish. The ideal that consumes you can extend to your family, friends, and community.

Fear of death

Things become ego-driven when our transient, temporary selves are the beneficiaries of what we create. Not so much our bodies or our way of life, but ourselves, our identities. So, in this form, the ego-driven way of producing is terrible because it’s all about us as unique people who want to be admired rather than about the intrinsic qualities of the result or its benefit to us. How other people view us is a poor metric to measure our lives by.

This links to why patterns began to fail. Many wanted to write patterns because they would be recognised as someone who had authored them. It would lead to others doing things their way. People want to be in control. The promise of someone using their patterns was an incredible ego trip, even if they didn’t ever directly see the patterns being used. I believe it’s why so many patterns weren’t as well-curated as they should have been—people wanted to show off their cool ideas or be notable.

Many humans fear death. So many wish for immortality. Being remembered or having your signature on a thing plays into that. It’s about exposure, security, and the chance for more work later. It can be about leaving a legacy. We want to believe that when we have designed something, it was good. We want a pat on the back and a ‘well done’.

A competition about naming things

Even while aspiring to an aggressive disregard for originality[PLoPD3-97], the pattern fans were affected by the ego-driven way. The movement trampled its own principles underfoot, turning into a hunt for all the nameable things. Naming a pattern of code or methodology gave better rewards than refining the concept of patterns itself. So, rather than extend and resolve in the way Christopher Alexander did with the transition to the fundamental properties of forms, the concept became diluted and diminished under the hooves of the many who wanted their names on articles.

Naming became the game. Naming became the value. This naming aspect became so prevalent that many now see it as the point of design patterns. I’ve protected the identity of the author by rewording this statement, so it cannot be web-searched, as I don’t wish to persecute anyone, but this opinion is not rare:

Design patterns allow you to communicate software design by offering you a common vocabulary. As a member of a team familiar with design patterns, discussions about design become very productive.

As an example outside of software:

Can an architect effectively discuss a house without using patterns for different types of rooms? When they say, ‘a house with two bedrooms, a kitchen-diner, and a bathroom’, they provide a lot of information in very few words.

Software design is the same.

Source intentionally obscured.

Hopefully, anyone who’s read this far can see the problems with the statement. The first would be that there are no bedroom, kitchen, or bathroom patterns. There’s not even a pattern of the house. Houses are a given. They already had a name before any patterns came along. Patterns name the relationships or organisations of elements given a problem. But clearly, a lot of people are listening and nodding along. Names are essential, and being able to name things is a powerful tool. Being able to communicate meaning enables other perspectives, evaluations, and interpretations.

So, names are valuable. Why not let this be? Why not recognise patterns as both naming things and problem-wisdom too? We should be wary because this dual meaning led to its absorption into the dominant meme of naming things. When design patterns are just names, we under-develop them, perpetuating the problem. And when they are names, they inevitably refer to the solution, the goal, not the problem.

The pattern people may have been looking for behaviours to claim, hunting for new, valuable, and unique things to give names to. They didn’t take the next step, to combine and refine the spoils of their hunt to distil the fundamental properties of software patterns. They simply continued to hunt. Their task drifted into cataloguing and publishing articles with named authors. And there lay the ego: in the name at the end of the article or the front of the book. So, the race to extract value destroyed the reward.

Predator patterns

Christopher Alexander claimed an “I” lies behind all living structures[NoO4-04], though not behind all structures. He often wrote about structures that were dead, or at least not living. One interpretation of this is to think of the structure of things in the world as being alive when they are a product of a living process, or at least a process that does not destroy the wholesome living structures.

But we live in a world constructed by living things. All structures must be living, even those Alexander considered dead. I believe he failed to recognise the life form. For one thing, these new building forms are propelled by something; their form is self-constructed. The style emerged from the environment, so they fit the bill of being at least alive in the sense that the system creating them is self-propelling and self-maintaining.

In his last book, The Battle for the Life and Beauty of the Earth[Battle12], he names the beast generating these structures ‘System B’. But he does not acknowledge that the system is alive with natural defence mechanisms and self-righting feedback forces, despite their clear presence.

Christopher Alexander tried to rekindle life in structures and buildings but may never have sought out the predator or parasite killing them in the first place, even when confronted by it. He definitely recognised the efforts by System B enough to list1 many of the effects in The Nature of Order, Book 3[NoO3-05]. A thing as good and powerful as the natural building process does not fade without reason. The natural unfolding process must have been attacked and subjugated, passively or actively.

Living processes thrive and survive through the creation of wholesome entities at each step. What do these other processes create to increase their ability to grow and continue? A good candidate is money. Money equips you with the power to control others directly while not supporting them in their processes. Money can hire labour to act without ideals. Production techniques developed over the last century have removed craftspeople who can create to a dynamic specification and solve local problems. These individuals were replaced with robots that follow instructions and fail as soon as there is a deviation.

Money uses image-driven creation because of distrust. A building developer will have goals but believes the builder can’t be left to resolve them, so they require an image before signing off. Because builders are not trusted, they become untrustworthy. When they no longer share goals and diligence goes unrewarded, there’s no point in doing more than what they are paid to do. The further the builder is from the client and the less control they have over the implementation details, the less engaged they can be in the project. Money creates a care gap.

Image-driven construction gets things built fast. Even though they may be the wrong thing, they are complete. The contract is fulfilled, so everyone gets paid. If we are used to image-driven construction, it shifts our expectation towards seeing faster turn-around or reduced costs as a measure of success. Our goals shift from building the right thing to building within the contract as swiftly and cheaply as possible. When a developer buys land, they reward builders for being quick and cheap, so anything that slows the development down is cut or ignored. There is no clear benefit in trying to do or be better.

The core predator pattern is that of using expense reduction as a goal, not requirement satisfaction within a budget. It’s an abnormally attractive pattern and easy to slip into at all levels of any development. Getting a project right needs to be an effort that is coordinated at all levels. However, reducing costs and increasing pace only requires a team to put some effort into optimising costs and processes at the local level. When a team member finds a shortcut, they look more productive; their shortcut becomes the new baseline against which others are measured, and so the pattern spreads throughout the process.

‘Cheap and fast’ is a contagious goal. It’s a predator pattern of construction. It destroys other patterns by supporting the creation of many other anti-patterns forming around the forces of cost-cutting and metrics based on money rather than value.

Naming things

Some predator patterns are attractive. They pull you in and lure the unsuspecting into supporting their grander schemes. The systems operate unnoticed. Natural responses and motivations reinforce them and normalise their impact.

We name stars and comets. We name elements. So many of them are named after their discoverers or things dear to them. Naming things is important to people. It’s fame and legacy again. We want recognition of our purpose. The predator here is the feedback we get when naming things. It stirs us to find more things to name rather than discovering the connections and unifying or refining what we already have.

‘Coining the phrase’ is an anti-pattern. ‘Naming things’ for kudos and fame is an anti-pattern that affected the pattern movement quite badly. In fact, ‘leaving a legacy’ could be the prime predator pattern. Many poisons derive from the desire to leave a mark on the world—to escape death.

Denial of images and ego

The pattern-based processes for designing deny this image-driven approach completely. The establishment, littered with people with reputations for designing beautiful buildings (even if they were never built), could not tolerate such impudence. Patterns contradicted their values because they were about place-making. They did not merely design a building, but made a place for people to inhabit and within which they could grow. Paper designs would not survive this assault, so the establishment fought back.

The idea that a building was only good if built in a context also ran counter to the image-driven process. Patterns required an architect to aim to make a building fit its surroundings so well that it would not look out of place. Patterns invited designs that conformed and complemented rather than being curiosities and contrasts. This, too, was unacceptable for those wanting to leave a mark or a calling card. The pattern movement had to be stopped before it stripped all possibility of fame and reputation.

In a defensive manoeuvre, the establishment turned its back on Christopher Alexander. In some places, they halted projects before they were fully underway or claimed he was a traditionalist, denigrating his work as primitive and regressive[NoO4-04]. In other places, the establishment would bribe and cajole people into not working with him2 or demand strange requirements that were never asked of anyone else working on a development. Over his career, the establishment turned from loving him, to fearing him, to shunning him, and then to indifference. There are still pockets around pushing the principles, but without perceiving the predators, the pattern-based process will permanently remain prey.

Fame, fortune, and out-of-print books

Recognition and approval are powerful motivators. They’re a form of immortality for some and security for others. Either way, the result tends away from the natural process of building and towards a situation where the individual creates problems for the users of their products. In the case of books, authors wish to claim they have written a book; they want some authority.

There is no big money in writing books and even less in updating them. But being published is priceless. Well-loved books create speaking opportunities and offers to work at better places for more money. This leads to an interest in holding onto the fame more tightly than looking after the work. This predator pattern affects the motivation and integrity of authors3 of non-fiction works.

If you’re enjoying reading this and want more of my ramblings, I’m available for conferences and consultancy. Just drop me an email at sellout@unresolvedforces.com.

The draw to consultancy is a predator pattern too. The immortality associated with having an authoritative impact has an alluring aroma. Consultancy allows the individual to shed some of the drawbacks of failing, as it’s difficult to fail at telling people how to do things. But it’s a trap as a consultant becomes detached and has to start acting on instinct rather than experience. Eventually, they make the same mistakes as those without experience, but at least they are still paid for the advice.

These outcomes run counter to making a book available cheaply to others or ensuring the information inside the book is unrestricted to all who need it. They run counter to the principles of patterns, discovery, and dissemination.

There were also issues with the goals driving people to write patterns down. Developers enjoyed being part of the pattern movement because it was meaningful to them. But to elevate from bystander to credited pattern finder would be even better for so many. So, developers hunted down patterns and hoped to find something that could get them published. They tried to find patterns in their work. They tried to guess at patterns outside of it. This was the fork in the road where the problems with software design patterns really began but could have been avoided.

When a developer’s goal is to find new patterns, they will find something they recognise as a pattern. We are, after all, pattern recognition machines, for the most part. If you plant an idea in my head, I will see it if I look hard enough. Consider the Kuleshov effect, where a short movie clip primes us to assume a particular emotion on a neutral face. We can accidentally use the wrong pattern because it was the first one we thought of or because we require finer discernment to see how it doesn’t fit. A developer on the hunt for patterns cannot see how their patterns are not patterns. They may be idioms or simple consequences, or even anti-patterns and recurring failures under a different lens. Just because something happens repeatedly doesn’t mean it’s good, and it doesn’t even mean it’s self-forming.

Some developers appear to have found patterns through wishful thinking. The general rule was to only include those which had been seen at least three times. Compared to the architectural patterns in A Pattern Language, software design patterns are seen far less frequently before we document them. All the two-star4 physical building patterns had plentiful examples. Some software design patterns, as documented, have only a single example. Others, none at all. We should only consider regularly witnessed patterns or those providing numerous concrete examples to better show commonality and variation.

Many of the most famous software design patterns had not been observed more than a handful of times before they were included in the catalogue. Once documented, they became easy to spot and suffered from the frequency illusion, also known as the Baader-Meinhof phenomenon. They were seen everywhere, even where they didn’t exist. This is another reason why the patterns in A Pattern Language[APL77] seem so different. Those in A Pattern Language are the kind which are seen repeatedly without anyone hunting them down, and they all appear to be self-starting patterns.

By inventing situations they wanted to see, software design pattern hunters claimed patterns so other people might begin to see them too. Or they claimed negative or abusive recurrences as patterns because they had lopsided benefits that tilted in favour of the author.

Command and control

Command-and-control begets those who want power. It’s a pattern born of fearful insecurity and a desire for enduring life. It’s one of the most natural predator patterns, and only trust can counter it. But trust invites dangerous reliance on the unreliable. Any organisation will likely be risk-averse, and every time we reward faith with pain, the system’s assumptions will bias further towards control over guidance. Vision will turn to direction, and goals will turn to tasks. The burden of doing what you are told to do rather than what you believe should be done removes your drive to do the right thing. Then, your actions betray your intention; you do what is asked and prove them right. You can’t be trusted to do the right thing or speak up when you see someone doing wrong. The circle closes, and you dream of being in charge so you can do it better. You are now the next generation of those who tell rather than organise, demand rather than inform.

The accretion of policies by an organisation in response to problems could be called the scar tissue5 of wounds inflicted. The policies impede the flexibility of the organisation. They are a history lesson in the things that caused it pain in the past. And as with history, an organisation will not want to forget its lessons. The policies are a form of lost trust. They deny freedom for fear of future trespass. We see it with setback laws and planning permission: laws created because at some point people took advantage, and now no one can be trusted.

This predator pattern is the ‘lack-of-trust codified’. Written rules and laws hold us back from being allowed to make local judgements because it’s ‘quicker-and-cheaper’ to rubber-stamp a denial on a request than to carefully consider each case. The predator pattern generates variations of this anti-pattern wherever it hunts. ‘Quicker-and-cheaper’ wins out over the reasonable and considered.

Viral infrastructure

There are some anti-patterns that, while self-forming and systemic, are repulsive. They create unmistakable problems, but the root cause is difficult to abandon.

Personal transport is one such pattern. We’ve seen it grow slowly but inexorably over the last century. Whole cities have become owned by cars instead of people. The proliferation of suburbs was made possible by cars, and cars were reinforced by their existence in return. In this pattern of mutual reinforcement, cars allow people to work further from home, so more workplaces can grow, pulling in more people from further away. Shopping malls cater to more people. They have large parking areas, which are only required because transport infrastructure wasn’t built fast enough, so there are many cars and few people. The infrastructure for all these highways takes up space too, meaning there are gaps between places that would have previously been within walking distance, reinforcing the need for a car.

We see this anti-pattern in software development as well. Many sites build up a web stack from a slow wrapper around a database, which requires more instances, load balancing, monitoring, fail-over, and virtual machine on-demand spin-up. But sometimes, all of that could be a single process on a shared device if the basic structure of the data stored were more efficient6. If your data were ‘walkable’, the collection of servers would not be necessary. If we start small by thinking about what we need now and optimising what we have rather than scaling, we can get very far. But copying everyone else’s scalable architecture is intellectually cheap, and it can be fast to market.
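
As a toy illustration of what ‘walkable’ data might mean, here is a sketch that assumes the working set fits in memory: one process builds an index at start-up and answers lookups with a dictionary walk instead of a network round-trip. The products.csv file and its column names are hypothetical.

    import csv
    from collections import defaultdict

    # Build an index once at start-up from a flat file (hypothetical
    # products.csv with 'category' and 'name' columns).
    index = defaultdict(list)
    with open("products.csv", newline="") as f:
        for row in csv.DictReader(f):
            index[row["category"]].append(row["name"])

    def products_in(category: str) -> list:
        # A lookup is now a walk through an in-memory structure, not a query
        # sent to a separate database tier.
        return index.get(category, [])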

There is, of course, a limit to this. Even in the real world, we need trains, buses, and highways between our walkable cities. It’s just a matter of how quickly we jump to large-scale solutions. We don’t catch the bus to the corner shop, even if there is a stop outside both our house and the shop. But we put up with a culture of getting in the car to travel less than a mile.

Regression to meanness

Broken windows and workarounds set a level of expectation. The anti-pattern of ‘deviation-to-low-standards’ can only be countered by actively working against degeneration. Not thoroughly investigating the root cause of a problem leads to hopeful fixes. These workarounds are faster than fully understanding problems, and they set the expectation for how quickly future issues should be resolved. The system rewards those who turn around fixes fast, not those who never create problems. The culture of workarounds breeds agents who prefer cure over prevention.

Design patterns suffered from many of these anti-patterns. Patterns became burdened with layers of corrections and grew in number rather than being dug into to find the fundamental forms or to create generative languages. The pattern movement suffered from naming and money creating weird incentives. The regression to an easier form for the sake of teaching led to the design pattern lessons we see repeated today, which misunderstand the original GoF patterns and interpret them in a shallow, solution-oriented form.

All this left the software design pattern movement as one that pushed people away rather than attracting them.

1

which we will return to in the How Do We Fix Patterns section

2

Many examples of these incidents are recorded across Christopher Alexander’s publications, most notably in The Battle for the Life and Beauty of the Earth[Battle12], but also in the Production of Houses[TPoH85] and some implicit in The Nature of Order, Book 3[NoO3-05].

3

Don’t worry, I’m clearly immune. I keep releasing my content for free.

4

The none-, one-, or two-star mark Christopher Alexander used in A Pattern Language[APL77] to show whether the pattern was, to them, a draft, a very likely pattern, or almost certainly a pattern where all wholesome solutions would include its invariants in some way.

5

I picked up this term during a conversation, but it seems to originate in REWORK by Jason Fried, in the section ‘Don’t scar on the first cut’.

6

If you understand your data, you can drastically cut your hardware utilisation and response times.

There is only one book

Searching the web with questions like ‘How many design patterns are there?’ turns up answers of 22, 23, 26, or 30 patterns. You need to do a bit of digging into other sources before you find references to all the other books and repositories. For many, and I believe the vast majority of developers, there is only one book on design patterns. All the rest have been hidden from the mainstream behind the popularity and success of the GoF book[GoF94].

When developers learn about the GoF patterns, they can end up thinking that’s all there is, so they attempt to put all their different problem pegs in the same square solution holes. Some are square, so they fit and become examples of how good patterns can be.

But some are round, rectangular, or arched, and they still fit, but something is a bit off. The patterns are stretched, but the developer is still doing the right thing as far as they can tell. They have problems; they find a matching pattern solution. If the pattern seems slightly off, the mismatch is ignored or put down to the design pattern not being used properly.

But then, there are those problems that don’t fit into any of the solutions at all. At that point, the developer either thinks the problem is unsolvable or builds something convoluted. Because of how they learned about patterns, developers haven’t learned to solve problems, just apply solutions. There could be patterns out there to help solve the problem, but when you only know of 23, and you know there are only 23, why would you go looking?

Many problems are repeatedly re-solved because any patterns that could help are hidden behind the smoke screen produced from the fame of the GoF book. The book even managed to overshadow the significantly more nuanced Pattern-Oriented Software Architecture[POSA96] series, which is a great shame, as POSA was long-running and includes substantial improvements to the core concept of software design patterns.

It’s possible the GoF book had a positive effect at the time and caused many low-quality patterns to dissolve. The other pattern books lasted quite a while, staying around for a decade or so. Then, they went out of print, were ignored, or were used as examples of how the whole design pattern movement was a hype-driven marketing program to sell books and conference tickets.

The negative consequences of hype are hard to exaggerate. With a large proportion of the software development industry passively ignoring the available patterns, they were heading in the wrong direction. When academic institutes attempted to validate the patterns and found them wanting1, industry sceptics got the ammunition they needed to increase their ranks. The GoF book has been at the centre of a series of events, giving the whole movement a bad name.

But at least design patterns have a name.

They’re still around because the book was wildly successful. I’m writing about them because their ineffectiveness was a problem with real-world consequences for our teams and projects. In some cases, you could honestly say it was our fault for not reading the book thoroughly. However, when someone tells me I wasn’t following the instructions well enough, I refer them to Donald Norman or the Nielsen Norman Group of usability experts and remind them that user error points to a correction to be made to the design, not to the user.

GoF categories boxed things in but still looked at the implementation side of many situations. When you consider Factory Method versus Strategy and wonder why one is creational and the other behavioural, you have to ask whether the classification claims that creating isn’t a behaviour. So much of the GoF book stems from it being an object-oriented design book. So much needs revising. And no, I’m not just talking about Singleton!

The consequences of the book are positive and negative. I’m sure Christopher Alexander had more sales of A Pattern Language[APL77] to programmers than he ever had to architects. I’m sure his net worth was higher because of the GoF. I’m also sure we’ve created an environment that is both hostile and receptive to new works in the design pattern space, but the name ‘design pattern’ is polluted. We turned design patterns into a way of describing recognisable constructs that fulfil a design need. We use the term to describe the structure of forms, web pages, and other ideas and arrangements in general. The term has lost its teeth.

The missing point of reference

Maintenance can be difficult even when you know the structure of the code and data and why it’s built the way it is. When you are looking at unfamiliar code, it can get much harder. Why is this so? We can walk into almost any house in our neighbourhood and expect to find features in common with our own homes. The design is shared. The design is settled. There’s a long lineage of small adjustments to the design of houses over the ages, but standard components have remained unchanged for centuries. With code, however, so much of the design differed between similar programs that familiarity was highly unlikely. I use the past tense here because things have started to change.

In a house, we don’t design bricks, windows, stairs, or doors. They are the raw materials in most modern homes. Instead, we lay out the rooms and dress them. But there’s only really one layer to the design. In code, we don’t design the assembly instructions, the standard library of our language, or the operating system it’s running on, but we do create layers of design above them all. Your to-do list app will be a UI layer on top of a business logic layer that sits atop a CRUD1 model, which may sit inside or on top of a database access layer. These common layers are familiar to many. They feel like home, so when you find out someone has built something without using them or using something different, it feels alien.
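
As a minimal sketch of that familiar layering, assuming a hypothetical to-do list app (the names TodoStore, TodoService, and render_list are illustrative, not drawn from any real framework):

    import sqlite3

    class TodoStore:
        """Data-access/CRUD layer: owns the database and nothing else."""

        def __init__(self, path: str):
            self._conn = sqlite3.connect(path)
            self._conn.execute(
                "CREATE TABLE IF NOT EXISTS todos (id INTEGER PRIMARY KEY, title TEXT, done INTEGER)"
            )

        def create(self, title: str) -> None:
            self._conn.execute("INSERT INTO todos (title, done) VALUES (?, 0)", (title,))

        def read_all(self) -> list:
            return self._conn.execute("SELECT id, title, done FROM todos").fetchall()

    class TodoService:
        """Business-logic layer: rules live here, not in the UI or the store."""

        def __init__(self, store: TodoStore):
            self._store = store

        def add(self, title: str) -> None:
            if not title.strip():
                raise ValueError("a to-do needs a title")
            self._store.create(title.strip())

    def render_list(rows) -> str:
        """UI layer: presentation only."""
        return "\n".join(f"[{'x' if done else ' '}] {title}" for _, title, done in rows)

    store = TodoStore(":memory:")
    service = TodoService(store)
    service.add("write the next chapter")
    print(render_list(store.read_all()))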

Ancient Greek literate programming

In the past, we lacked these standard materials. We only had bricks. Even though many languages are vying for attention in the full-stack space for developing web, mobile, and desktop applications, they all share similar development and deployment patterns. The request routing, database-backed, CRUD-based, event-driven, model-centric approach to transactional applications is a pattern language. It’s not unreasonable to suggest that The Twelve-Factor App could be a pattern language. These layered designs, which we instantly recognise, are the closest we have to classical Roman architecture.

Unfortunately for software developers, these programs only make up a portion of the programs written today. Most embedded development uses a different model. Desktop content creation tools have a different model again. Almost all computer games have a very different structure, and there are many different types of game leading to differing sub-structures at that. There are many other domains with similar stories to tell, far more than I know of, I’m sure. Programs for each domain tend towards a particular style of architecture, but there’s no universal pattern language. Each domain has its own.

This means that for software, there is no singular pattern language. There are many. The search for software patterns and a software development pattern language was doomed to fail by starting from a paradigm. A book on the design patterns of object-oriented design would be like an architectural pattern book on working with bricks. A paradigm is a method or tool you can use to solve your problem; it’s not a problem in itself2. The benefit of patterns was a design process unencumbered by a specific solution.

Why are there so many domains in software when there was only one domain in architecture? Well, the secret is, that’s a false assumption. There are many domains in architecture too. The book is called A Pattern Language, in the singular, for a reason.

Let us finally explain the status of this language, why we have called it “A Pattern Language” with the emphasis on the word “A,” and how we imagine this pattern language might be related to the countless thousands of other languages we hope that people will make for themselves, in the future.

— Christopher Alexander, A Pattern Language, p. xvi.

In many of his projects, Alexander selected and generated patterns constituting a specific language to guide the development. For example, in The Production of Houses[TPoH85] there are 21 patterns in the pattern language, with variants specific to the context. The pattern Northeast Outdoor Space appears to be a variant of 105 SOUTH FACING OUTDOORS, but with attention given to the needs of the location, including both the impact of sunlight and protection from the wind. The patterns Front Porch and Back Porch are also unique to the project. The first was part of the initial language; the other emerged only later, when working with the families.

In [Battle12], the pattern language is presented quite differently to that of [APL77]. Gone are the cute, terse names of the older patterns. Instead, we have simple but to-the-point, sentence-sized descriptions with extended paragraphs to fully enclose their ideal. I saw this as positive progress for patterns, simply because good names are impossible to find.

The patterns in A Pattern Language[APL77] are those found in the realm of liveable spaces. The patterns don’t fully account for military installations, medical facilities, prisons, or schools. Some of them apply simply by virtue of being applicable to almost all buildings made for human habitation. However, the patterns in A Pattern Language never address the layout of airfields or ships. The book applies to a domain. Not all architectural domains, but just one that is rather large.

Software inhabits domains too. Each program generally belongs to one domain. The desktop application, the performance-critical batch processor, and the highly available network cluster management software each have their own pattern language—their own collection of patterns found in the development of a cohesive system within their specific domain.

But there’s absolutely no mention of this aspect in the GoF book[GoF94], and it’s only lightly touched on in the first POSA[POSA96]. So, there are no software architectural patterns in the GoF book because those were the days before you would find many common software architectures in object-oriented codebases. The patterns were all too generic, too fundamental, too much like bricks and screws and not homes or community spirit. The first book I found that started the ball rolling on constructing a complete pattern language was Patterns of Enterprise Application Architecture[PoEAA03]. It’s probably not the first, but it’s the earliest example I found that was close to A Pattern Language in form and function.

Later works reside in different domains with many patterns that help guide you through creation or transformation. Examples include:

  • React Design Patterns[React17] 
  • Node.js Design Patterns[NODEJS16] 
  • Language Implementation Patterns[LIP09] 
  • A Scrum Book: The Spirit of the Game[AScrumBook19] 
  • Patterns in Game Design[PiGD05] 
  • Cloud Native Transformation[CNT19] 

But other pattern language books, such as The Surprising Power of Liberating Structures[LibStruct14] and Fearless Change[Fearless04], were eye-opening because they inhabited an entirely different realm of interaction. Liberating Structures attempts to solve the problems of enabling or coaxing communication out of a group of people when they feel embarrassed, anxious, confused, or stuck. Fearless Change provides patterns that address the problems encountered when attempting to bring about meaningful change in an organisation—tactics for overcoming the inertia inherent in established communities.

In both books, there was no product, no building to inhabit, only the minds and culture of a group to transform. But they are pattern languages because they take commonly repeating problems in their domains and provide details on the types of solutions, the consequences, and how they fit into other patterns.

Without pattern languages, it’s difficult to find your way into patterns. It’s hard to find the right place to start when every pattern implementation starts and finishes without an apparent next or previous step in the sequence. The absence of a higher-level pattern structure or a dependent consequence of pattern application leaves the user stopping and starting, lurching through development.

1

CRUD stands for Create Read Update Delete, the fundamental activities in a database. The term turned into a way of thinking about what you need in any record-keeping technology—a way to insert new data, retrieve it, amend it and remove it.

2

I have strong opinions about OOP, but even I wouldn’t say it was a problem which needed to be solved.

Isolated patterns

Christopher Alexander’s A Pattern Language[APL77] claimed it should not be read in isolation but along with The Timeless Way of Building[TTWoB79], as they were really one book in two volumes.

We have been forced by practical considerations, to publish these two books under separate covers; but in fact, they form an indivisible whole. It is possible to read them separately. But to gain the insight which we have tried to communicate in them, it is essential that you read them both.

— Christopher Alexander, A Pattern Language, p. ix.

When people just read A Pattern Language, they can come away thinking the benefit of such a system is in cataloguing the patterns and how to reference them. However, if you read The Timeless Way of Building, you realise there is much more to it. One book teaches the laws of physics; the other is a book of facts. Both are valuable, but you learn how to think from the first; in this case, the first was The Timeless Way of Building, even though it was published second1.

Much effort[APL77] went into finding and deciding which patterns were valid and which were merely wishful thinking. Christopher Alexander and his co-authors spent eight years and wrote thousands of words to help us understand what a pattern should mean to people. And above all things, they taught us about ‘the quality without a name’, which was fundamental to discovering new patterns and refining them in the architectural domain.

If you read these two books—and if this subject has intrigued you, then perhaps you will—I recommend reading The Timeless Way of Building first. The book invites you to skim-read it, only taking in the headings. Your first read will take only an hour or two. Once you have read The Timeless Way of Building as deeply as you are willing, only then skim through A Pattern Language. However, I cannot help but also recommend you take the time to read the much smaller book, The Oregon Experiment[TOE75], as it makes many of the processes and patterns in the other books concrete (sometimes literally).

I can’t know, of course, but I believe many software developers spent some time reading through A Pattern Language but never sat down with The Timeless Way of Building. Many pattern fans appear to have come away with a comfortable set of templates. Their patterns have a standardised form, much like those in A Pattern Language, but they lack a reason for having been collected in the first place. They lack the larger interconnected contexts.

In many cases, we see the absence of purpose where a pattern is documented but otherwise useless. The pattern might happen repeatedly, but it’s too niche. Or the solution is too obvious. It is so obvious that even though it’s a niche problem, it was resolved the same way each time and nothing unexpected was learnt.

In the patterns of A Pattern Language, we see subtle references to ‘gotchas’ and validation criteria. The patterns are rarely written as how-to guides or step-by-step instructions on what to use or how to implement a solution. Instead, they are arrangements and concerns you must address.

Pattern fans should have invested more time in understanding the core principles of patterns, but it seems they spent more energy on acts supporting their position in the movement and signing their name. And, as systems theory predicts, a low-energy commitment with high reward is an attractive behaviour. Accordingly, others followed.

Design patterns were never meant to be identified as reusable elements separate from each other, but so much of the work was done by looking at the parts, not the whole. When engineers try to understand machinery, their documentation includes diagrams of how things work. They often have two types. The first is the diagram of the mechanism as a whole. The other is an exploded view of the mechanism, with lines showing how the parts fit together. Most of the early design pattern work spent time reviewing the pieces in an exploded view, and only some of the pieces at that. Without the whole picture, many readers misunderstood the patterns as concepts they could introduce as new elements into an existing system. The truth about new elements emerging from the old was lost along the way.

1

Except, it was actually the third, The Oregon Experiment came first in 1975, then A Pattern Language in 1977, and only later, in 1979, The Timeless Way of Building. Such an unfortunate ordering.

A pattern communicated badly is not a pattern.

A pattern is only valuable when it provides you with the wisdom to know what to expect and how to progress towards a solution. When it is communicated badly, it can hardly be called a pattern. When the problem is not precisely delimited, the pattern will be unrefined and used in more places than it should. When we use a pattern beyond its limit, we dismiss it as lacking because it fails to provide the deep insights we demand from a good pattern.

Patterns can be communicated poorly by not conveying their value. But they can also fail to reveal their value at the right moment. This second part affected the patterns presented by Christopher Alexander as much as it did the software design patterns movement. The patterns in A Pattern Language[APL77] are organised by scale, arranged in a sequence of how they interact. But this sequence is awkward for someone trying to build according to the patterns. The book is organised to educate a reader on the patterns available, not to use as a guide to construction at any particular scale.

Writing to be consumed, with the reader in mind, is a pattern of its own, one for authors. I know I had to learn it myself, and pattern authors are authors, too. Many books on how to write repeat the same pattern: keep your audience in mind and think about what they would value. In this case, pattern users needed some structure for their projects. And this is why the late arrival of The Timeless Way of Building[TTWoB79] was unfortunate. The book provided the necessary motivation and guidance but somehow became the less-referenced text.

But what about not communicating the value itself? How can so many patterns be presented and yet their value not revealed? This happens when a pattern is defined by its solution. The greater the specificity, the harder it is to adapt the solution to our specific problem. The weaker the problem definition, the more effort it will take to ensure the solution satisfies our problem and the more the patterns begin to overlap.

At this point, I want to take a detour into another problem—that of overlapping patterns. There’s nothing inherently wrong with patterns overlapping, but it’s the type of overlap that presents the problem.

Think of it this way: the problems addressed by patterns are generally about complexity. Complexity is about interactions, and the web of interactions spreads far and wide in any actual project. You cannot put a boundary where the interactions stop because they don’t stop. If they did, it would be two separate systems. But patterns help you solve two things. The first is where to put the boundary by asking, ‘Where are the interactions more tractable?’ The second is how to resolve the more complex interactions within that boundary. Thus, the reason why patterns overlap is that they must touch at these more tractable interaction boundaries. This way, we design with fewer variables in mind at any one time.

Pattern Overlaps

Pattern D strongly supports B, C, and E by reducing their overlap.

Patterns should overlap in how they constrain each other. When they don’t, there’s a gap. Another, smaller pattern should fill that gap. But patterns should not overlap in such a way that they solve the same problem. If two patterns are defined by the same problem, then why are they separate patterns?

When you overlap patterns well, the problem being solved is central to one pattern. The other problems around the first problem have their own patterns, which help constrain the solution. But when you have badly overlapping patterns, you find that the patterns are trying to solve the same problem and have different strategies. At that point, you are not using all the patterns together to solve a problem; you are choosing from patterns to solve problems. This is an immediate indicator that what you are working with is not a pattern.

There are quite a few software design patterns that fit this description.

Consider internal and external iterators: the twin patterns of who is in control of the iteration. What is the problem here? Is it the same problem? If not, why did we map them to one design pattern? Now consider the visitor pattern and the interpreter pattern. Are these the same overarching pattern, or do they simply overlap? Does visitor do anything interpreter doesn’t? Depending on your situation, the problem will belong to the domain of iteration techniques or callback techniques. In either case, the problem is all about operations on containers. There is really only one pattern here, but there are many points of wisdom about it. The problem can be refined when we learn of other forces constraining the solution, such as the pattern of a plugin architecture. With this pattern in place, you need to provide a way to iterate over the containers without necessarily knowing the container type, so you are immediately reduced to iterator factories or callback-style visitor approaches to iteration. This isn’t deciding between different container element visitation patterns; this is deciding which aspects of the problem are constrained by other patterns or which parts of the context significantly change the pattern response.
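To make that concrete, here is a minimal C++ sketch using invented names (Item, ItemStore): one container type, hidden behind a plugin-facing interface, offered once through callback-style (internal) visitation and once through external iteration. It is an illustration of the shape of the single underlying pattern, not a prescription.

    #include <cstddef>
    #include <functional>
    #include <iostream>
    #include <vector>

    // Hypothetical plugin-facing store: the host owns containers of Items,
    // but plugins must not depend on the concrete container type.
    struct Item { int id; };

    class ItemStore {
    public:
        void add(Item item) { items_.push_back(item); }

        // Internal iteration: the store drives the loop and hands each
        // element to a caller-supplied callback, a visitor of sorts.
        void for_each(const std::function<void(const Item&)>& visit) const {
            for (const auto& item : items_) visit(item);
        }

        // External iteration: the caller drives the loop through a cursor,
        // never seeing std::vector directly.
        std::size_t size() const { return items_.size(); }
        const Item& at(std::size_t i) const { return items_[i]; }

    private:
        std::vector<Item> items_;  // the concrete container, hidden from plugins
    };

    int main() {
        ItemStore store;
        store.add({1});
        store.add({2});

        // Same problem, two surfaces: callback-style visitation...
        store.for_each([](const Item& item) { std::cout << item.id << '\n'; });

        // ...or external iteration under the caller's control.
        for (std::size_t i = 0; i < store.size(); ++i) {
            std::cout << store.at(i).id << '\n';
        }
    }

Whether the store exposes the callback, the cursor, or both is exactly the kind of decision the surrounding patterns and context should constrain.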

All that aside, we have to contend with not just the producer but the consumer too. Communication is a two-participant activity. People cannot tell whether they have understood if there is no opportunity to verify their interpretation is correct.

We make this worse when we publish solutions. When you present a solution, you give the answer to a question. You are not providing your ‘working out’. In many patterns, the reasoning is the value in the solution, not the solution itself. Clients pay for solutions. Design patterns are not for clients.

You cannot ship a pattern as a solution in itself. When you attempt to show a pattern via a solution, you miss the reasoning, and the person reading about it does not fully understand why the solution was appropriate to the problem. I find myself repeatedly returning to one of the earliest complaints about design patterns: they were solutions in search of problems.

Unfortunately, even A Pattern Language[APL77] presents some patterns as solutions and names many of the patterns after aspects of the solutions. When we want to teach someone something of value, we don’t furnish them with an answer. This basic fact of teaching should help us understand why the current form of software design patterns is poor communication. We’re not showing our ‘working out’. It is the process of reaching the solution that was of value all along. In the case of patterns, the working out would be the considerations, wisdom, and principles guiding the solution. It is also the steps, the refactoring, providing evidence of the pattern as a process as much as a thing.

Software developers need to know:

  • when constraints will no longer apply,
  • when new constraints will,
  • how any deductions were made,
  • upon what assumptions the decisions were based, and
  • which constraints were not considered.

With this information, developers can determine if the assumptions came true in their case. They can know whether the constraints still apply and whether any new observations have rendered the deductions invalid.

Most software engineers look for shortcuts and automation to leverage value faster. The GoF did the same by cataloguing what they saw and providing it to others to consume. The problem is they collected the patterns and published them, but we needed to understand patterns in general to use them properly. When the book[GoF94] was published, the GoF were not inexperienced developers, but they seemed to get caught up in keeping track of the results and not in the motivation for why the patterns they found existed in the first place.

Clear communication is absolutely fundamental to the value of design patterns. I want to reinforce the importance of understanding that communication is a two-person sport. It’s a collaboration between two minds. One mind takes something of value, the pattern they found value in, and translates it into words in the air, on paper, or on the internet. The pattern sits dormant until someone else reads it, converting it into meaning in their mind. The pattern’s value is reduced or lost if that translation breaks down during either stage.

We should commend the efforts of the pattern community to provide shepherding to those wishing to write patterns. Ensuring patterns were written down such that they could be understood without the author present is obvious in hindsight. However, software developers’ appreciation for the distance between the author and the reader is not deeply instilled. I’m sure only a few pattern writers had any formal training in technical writing. I know I have not. I rely on editors and others to help me make sense when I write.

What the shepherding process did was reinforce the slowly diluting principles of patterns. The lack of a problem-centred design impeded their efforts without them even knowing. They concentrated on ensuring the authors defined the forces, constraints, and solutions well. However, because the movement worked with a solution-centred approach, most patterns either failed to define a root problem or explain the unique wisdom behind their particular solutions.

Design pattern dance party

When you have a big family party, such as a wedding or a large get-together in a very family-friendly environment, there will be adults and kids sharing entertainment. At these street parties or celebrations, you will often get music and embarrassing dad dancing. But you also get children copying the adults. May I suggest something ridiculous? The children are like those reading patterns from a book or website, or being taught about them by someone who thinks applying a specific pattern would solve their current problem.

The children see a finished form of the dance—the performance. They see the adults moving their arms and legs, so they move their arms and legs. They see them lift their arms above their head, so they do that. They see them shake around, so they copy and shake. What they do not see is the rhythm. They do not discern the repetition with variation. They do not grasp the value of keeping time. They do not realise the importance of sticking to a particular area of the dance floor. The children are a mess of waving arms around and wiggling bodies. As far as they can tell, they are dancing, so they are happy and proud of themselves—just like people who first learn design patterns. They can see and understand the solutions, so they begin to apply them wherever they think they fit, not even knowing there is a rhythm to their use.

And then there’s also the fact that the children are watching the other children dance. When they judge the other children, they think they are doing fine. Ask a seven-year-old what they think of an eight-year-old’s dancing skills; they will likely believe the older child dances exceptionally well. This also applies to our budding design pattern user. Surrounded by other people picking them up one solution at a time, they will be commended on their use of patterns by those equally unqualified to discern good pattern usage from bad. Given how few people deeply understand patterns, there’s only a meagre chance of being in a crowd with someone who understands when not to use one. This leads to a spiralling reinforcement of poor pattern usage, overuse, and looking down on people who don’t use patterns all the time. It becomes something of a cargo cult.

Each child and each developer thinks the other is doing well. All pat each other on the back while generating over-engineered code and horrendous dance moves. They’re using too many inappropriate patterns at the wrong time or making inappropriate patterns with their movements and irregular timing.

At some point, the children grow up and start to discern better. They develop an eye for what makes a good dance and learn the value of rhythm1. They look back at old family videos and laugh at their earlier source code. We should not blame ourselves for being naïve. It’s a given. We should, however, not enable or reinforce the bad behaviour for long. We should find a way to teach the ability to discern more than provide the answers to questions.

The authors of many patterns must have been in a similar situation. They were shepherded by other writers who had been through a process but had maybe not entirely understood it. Misunderstanding is more contagious than understanding. Slowly, more and more people understood the core principles less and less, but they could talk about them with great confidence while gyrating wildly in front of other developers.

1

Well, some do. Others become dads.

Deviation and dilution

About half of the first paragraph in the ‘What Is a Design Pattern?’ section of Chapter 1 of the GoF book[GoF94] was given over to a description of what a design pattern is outside of software development. For programmers to grasp what the term meant, much more was necessary. This most famous book on design patterns was published so bereft of detail and nuance that it is no wonder patterns have been so misunderstood by so many for so long.

The writers introduced the term a few times but never presented it with its foundations. I believe this was a mistake, as different people interpreted the words differently over the years, and the critical aspects of Christopher Alexander’s work were diluted and lost in the excitement.

Because the GoF book[GoF94] only references implementation-level object-oriented design patterns, a problem arose in the way they were understood as a concept. Imagine a famous fantasy book referencing elves and dwarves, orcs and wizards. Imagine it became such common knowledge that people assumed certain traits of fantasy books before they even read a synopsis. The success of the GoF patterns had an unintended consequence, accidentally suggesting that design patterns don’t work outside of object-oriented design. It was a strange side-effect but a mildly damaging one for other programming paradigms. No true elf would write production code in a functional language such as Haskell. The GoF patterns’ success also caused the meaning of design patterns to stick to a specific abstraction level. Modular architecture and team development practices are sorcery and dark magic, and as such, they have nothing to do with the science of design patterns. Other books vehemently contradicted this narrowing1 of meaning, but their popularity was insufficient to overcome this common misunderstanding.

Being the most famous book on software design patterns, it set the meaning of design patterns for the whole industry. Now, we have a diluted term and a hard time finding the right words to describe the real thing.

The term ‘design pattern’ is now a buzzword for ideas or practices. You see it used as a prefix for collections of ideas. Instead of 114 tips on how to clean up your network resources, you might see a Network Resource Design Patterns Handbook. Instead of a book of collected best practices for recipe layout, you get Design Patterns of Cookbook Construction2. Non-pattern design pattern books dilute the meaning and leave the pure patterns books vying for shelf and mind space. Genuine design pattern books are crowded out by the better-performing, more marketable content. I don’t wish to devalue the content of the better-performing books, but I do want to acknowledge how they hurt the specificity of the term ‘design pattern’.

The sting has been taken out of the term, and that’s both good and bad. Some people think design patterns are awful, so a new book on a subject containing design patterns might decide to hide it to some extent, as it might put off potential readers. Wait, is this why I called this book Unresolved Forces?

The power has been taken out of the term, leaving us with a problem. How do you refer to outstanding design patterns that are pure and revelatory? What do we now call powerful, self-forming, and reinforcing patterns of interaction in a building space? The next term along is meme, which has itself been co-opted by GIFs and ‘Rickrolling’. These terms themselves are not very strongly reinforcing patterns that correct themselves back to their pure forms. Instead, they mutate happily into whatever they need to be. I should not be shocked, really. Survival is more important than some artificial arbitrary meaning that spawned the terms. Culture eats meaning as it turns idioms into traditions.

This leads to a concern. How can we collect and curate patterns without a name by which to reference them? How can software design patterns be coaxed into some well-formed collection? Well, they can’t. And shouldn’t.

What we wanted from design patterns was a collection of tools to share to help guide us and bring us together. But that language was stifling. Like object-oriented programming, it held us back because it became the lingua franca for design talk. The design space is limited when teams only use known patterns or, more importantly, whatever the lead architect was aware of or informed about in time.

2

Thankfully, as far as I know, these examples don’t exist.

Battle for life

In his last published book[Battle12], Christopher Alexander showed how powerful the systemic forces are against the pattern movement. The book recounts the events before, during, and after the construction of the Eishin Campus. He began by explaining the existence of two systems—System A, which is the timeless way, and System B.

System A is the way of reviewing the lay of the land and the requirements of the people[Battle12]. This was his system. I suppose he called it System A because, in some way, he felt it was the original way.

System B is the money, ego, and image-driven approach—the system of rules and regulations imposed from above without humanity. System B was and is the predominant way we build things in the modern world.

The book recounts the setbacks and successes when working in System A. The Eishin Campus project was a school construction project to be carried out in Japan by Christopher Alexander’s team. System B did not defeat System A on the Eishin Campus project, but it did leave a mark, reducing its value and strength. System B fights a dirty game. It is a robust survival system. Money drives it, and money has no morals. There is an account of an event[Battle12] that begins as a recommendation, escalates to a bribe, and ends with a death threat to get rid of Christopher Alexander, his team, and their System A approach to the development.

System B protects its interests through whatever means are at its disposal. We know this to be accurate because we hear far too many tales of how large corporations will consider illegal behaviour if the fines are less than the cost of doing the right thing. We know people protect their way of life and are inherently risk averse. System B, being a system of people and money interacting through contracts and requirements, is only beholden to financial risk. Systems theory would invite us to make many strong assumptions about the behaviour of such a system.

Christopher Alexander made some strange observations during the project. One such observation was the ineffectiveness of System B—not inefficiency, but the inability of the system to produce the best possible outcome given what was available. System B lacks trust and only allows those in charge to know things and make decisions. To maintain control, many feign knowledge rather than be caught out as fallible. This leads to waste and ineffective processes.

In one example1, there was a call for stone slabs for the paths between the school buildings. The System B contractor on the project claimed the paving would be too expensive as the required quality would be beyond their budget. When the System A team suggested a rougher quality paving, the System B contractor shot down their idea and claimed absolute knowledge of the requirements and materials. Either on the off-chance or in a fit of diligence, the System A team went to a stone yard to determine if they could find something of a compromise. There, they encountered an expert who suggested their original rough-hewn stone would be good enough for the paths. The expert further reinforced the point, making reference to the yard where they stood. They claimed the lorries had come back and forth over it for many years and added that it was laid with the very same rough-hewn stone. Strong enough for lorries, let alone students walking between classes.

System B is very sure of itself. It knows. It does not think. It can make decisions quickly and offer certainty even where none exists. Confidence can be compelling when your goal is clear and you want to know whether you can afford to complete the project. We will ignore the fact that System B almost always overruns its budget anyway, as it does not allow for late-stage adjustments. But the allure of certainty is a positive feature of the system.

During the project, System B defended itself further by causing setbacks and even making it so the project could never be fully completed[Battle12]. However, the benefit of System A is that even an incomplete project is usable, extensible, and repairable at a later date. The project was a massive success, despite all the damage dealt to it. The campus was known as a wonderful place, even in its incomplete state. Teachers and students alike found peace there like nowhere before. System B did not defeat Christopher Alexander, despite all its attempts.

However, System B did manage to defeat the design pattern movement in software. In building construction, System B was the establishment—the modern building method with contractors, suppliers, bids, and tenders. In software, the establishment did not need to fight back against the return of a natural process. In software, System B is the foundation upon which development practices were built. System B didn’t fight back; it did one better.

Subsumption

Something consistently happens in new technological ventures after the honeymoon phase, where the technology is new and exciting. People become annoyed that the successes aren’t fully repeatable. They want to understand where things go wrong. They want training. They want some structure to control and manage it so future usages of the technology are successful. Some people gain more experience than others and become trainers and teachers. Those with good track records—or at least the ability to convince other people they have good track records—set up consultation services. When there are too few consultants to go around, the consultants start training consultants. And that’s how certification programs begin. This is the System B of software (and any other technology or engineering industry, in my opinion), and these are its patterns of reducing the ability of any new process to fully replace command and control.

But notice something strange here. The first step on this path was when people blamed their failures on lack of training, not the new technology. Not everyone did. Some immediately decided the technology was at fault and continued on their merry way. They needed no more interaction with the technology to know it was not for them.

Command-and-control is the game we play of holding onto power and controlling a workforce to achieve our goals. When a way of working suggests giving up power, command-and-control defends itself. It does this by rewarding anyone who can argue convincingly that they can provide the same benefits as the new hotness without management giving up the power to demand a specific way of working. As with the building construction industry’s version of System B, financial rewards play a tremendous role in driving the outcome.

System B heals itself from attack by paying to convince the workforce they are in System A, such as when we are forced to follow processes tagged with the term ‘agile’, when they are anything but. Even open-source development is not entirely immune to this, as the belief system created by System B is contagious.

Even though I’m not aware of any direct death threats in software development, something similar happened. Some light has been snuffed out. For something to die, it merely needs to have no impact. To be silenced is enough. To be firmly and enduringly misrepresented is enough to kill an idea.

When the way you are measured and evaluated is baked into a contradictory system, the result will be skewed. Grading science papers on their use of colour and word choice is useless; Christopher Alexander found the classic approach to appraising students at architectural colleges just as useless. When the establishment measures your new work through an old and trusted lens, its judgements are often taken as valid, even when inappropriately applied. Refutation of value by a trusted source is enough to stop many from reviewing the content further. In all domains, the patterns movement has been under conceptual attack.

Death is the removal of influence

The Agile movement suffered the same fate. Hundreds of Agile coaches have certificates from expensive courses on how to whip teams into shape and do Agile at scale within a framework. There are Scrum trainers with many years of experience getting development teams to follow the Scrum process to the letter.

Most, if not all, of the original authors of the Agile Manifesto[AM01] must barely recognise what it has become. It has turned from a system that empowered the software delivery process by tightening the feedback gap into one that regularly sets up three- to six-month deliveries and calls it a success even when half of what was promised to the client never arrives.

As I write this, DevOps is starting to suffer the same fate. Many developers turned Eliyahu Goldratt’s thought-provoking ideas in The Goal[Goal84] into excellent advice, eminently actionable by many organisations. There were books on the subject in different forms, from a handbook to a fictional dramatisation of the process. There are guides on how to measure your progress by replacing traditional productivity measures with the much more valuable measurement of customer value.

The front cover picture of any DevOps presentation talks about joining the forces of development and operations, but that’s only a visceral symptom of the principles. The core of DevOps is about giving power to those closer to the problems. It’s a process of looking for bottlenecks, understanding them, and finding ways to exploit them. The word exploit would take some explaining, but this is not a book on DevOps, so I shall leave that for you, dear reader, to discover by yourself.

How you fix the bottleneck is a solution you come up with yourself; the point is to empower those at the site of the work to solve the problem by making it visible. DevOps is not a solution; rather, it’s an ongoing process of sensing, analysing, and improving things.

But we’re starting to see the meaning of DevOps dissolve into different things. Some DevOps roles are just a different nameplate on someone who knows how to start and stop servers. Some DevOps courses are solely about how to run Kubernetes clusters. There’s no teaching of the process of analysing the existing structure2. It’s as if someone studied several software shops and decided to sell the lowest common denominator to everyone else.

This is a deep poison. DevOps was aligned with learning how to find the real problems and how to discover or build specific, well-fitted solutions. Hmm, that sounds like someone I know. But it’s been wrangled into a collection of generalised solutions that proved to work several times. Oh no!

Platform Engineering is taking over the part of DevOps relating to creating better working practices, but what does that mean for the value of DevOps as a way to review your situation? Workers won’t conduct the analysis, and we will lose valuable insight into our processes. Gemba Walks3 will have to be undertaken by non-management (the platform engineers), and then things get clumsy and complicated to sign off and authorise, as the cost will warrant management’s attention. It all seems doomed to eventually become consultation again. DevOps will lose influence, and then the next thing will replace what we lack because we tried too hard to do one thing and unbalanced another.

System B appears to prefer to try out every possible silver bullet rather than face up to the idea that management is a job in which you need to pay a lot of attention to how the work is done while also giving up your power to those you have hired as experts in what they do.

1

[Battle12] p. 301.

2

There is a training course for Eliyahu Goldratt’s Theory of Constraints called the Jonah program. It teaches you how to analyse a production process in the same way the supporting character in The Goal did for the protagonist there.

3

Management cannot improve the process without seeing it happen, so Toyota used to have their managers walk the gemba, the place where the work was done, so that they could experience the actions and pace of the real things happening.

Poisoned

Systems are not agents and never act decisively. Rather than cause immediate catastrophic failure, they introduce a slow death to anything threatening their existence. Over the last 20 years, the established software development world’s defence mechanisms poisoned the meaning of design patterns. The reduced efficacy brought less attention, lessening the need for rigour or depth of analysis. With weaker patterns accepted and published, the validity of the design pattern concept degraded further. The masses of patterns beyond those in the GoF book[GoF94] became a swamp, safe to navigate only for the experienced pattern enthusiast. Many fantastic patterns were drowned out in the mire of poorly conceived and sometimes flat-out incorrect anti-patterns or fabrications.

In the early Pattern Languages of Program Design books, there are many examples of patterns that do not pass the test. Surprisingly, the very first pattern in Pattern Languages of Program Design[PLoPD95], Functionality Ala Carte 1, almost matches Christopher Alexander’s description of a non-pattern, the Madhouse Balcony. It does not make the world better. It kicks complexity over the wall.

The pattern begins by describing the problem of providing ever-increasing simulation fidelity while running on the same hardware. The forces at play are a demand for further features while retaining system responsiveness. The solution presented is one where the developer abstractly reveals the features’ impact (the runtime cost). The end user is then expected to make informed decisions about which features they want by picking them from this menu of items while not needing to be technically savvy.

Suspicious lack of insight

In addition to the problem of adding some complexity to support the configurability, the pattern suffers from having only a single recorded appearance. It does not appear to refer to any other found examples and has no constraints. If it were a pattern, we should expect some reference to how others solved the problem in their context. As it is, the pattern simply suggests the decisions can be given to someone else to worry about. Rather than solve the problem, make it someone else’s problem.

If you have played computer games on both PC and console, you might recall a difference between those two environments concerning fidelity. Console games do not generally make graphics settings the problem of the player. Only with the advent of modern consoles and 4K HDR TV has the option become relatively acceptable to present to the player, and even then, only if the game has a broad enough demographic to garner a genuine benefit related to player preference.

Adapting fidelity in real-time can drive level-of-detail systems and upscaling stages to handle highly complex scenes while not dropping frames. We can do this per frame, which is much better than asking players to balance their diet. We see this superior pattern play out in productivity software where the software reduces fidelity while elements move, such as when adjusting filters and effects in a photo editing application. As I think about it, I realise there’s an element of this pattern in video streaming and autocomplete for web searches; the quality of the result in both cases is lower, but the latency and availability are considerably better.
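As a rough illustration of that per-frame adaptation, here is a toy C++ controller (the names and thresholds are mine, not drawn from any engine): it nudges a single quality scalar towards a frame-time budget instead of asking the player to pick features from a menu.

    #include <algorithm>
    #include <iostream>

    // Toy per-frame fidelity governor: drop detail quickly when frames run
    // long, and creep it back up slowly when there is headroom.
    class QualityGovernor {
    public:
        explicit QualityGovernor(double target_ms) : target_ms_(target_ms) {}

        // Call once per frame with the time the previous frame took.
        void update(double last_frame_ms) {
            if (last_frame_ms > target_ms_ * 1.05) {
                quality_ = std::max(0.25, quality_ - 0.05);  // recover the budget
            } else if (last_frame_ms < target_ms_ * 0.90) {
                quality_ = std::min(1.0, quality_ + 0.01);   // restore detail gently
            }
        }

        // Consumers (level-of-detail selection, upscaling, shadow resolution)
        // read this scalar each frame.
        double quality() const { return quality_; }

    private:
        double target_ms_;
        double quality_ = 1.0;  // 1.0 means full fidelity
    };

    int main() {
        QualityGovernor governor(16.6);   // roughly a 60 frames-per-second budget
        governor.update(22.0);            // a slow frame: detail drops
        governor.update(12.0);            // a fast frame: detail creeps back
        std::cout << governor.quality() << '\n';
    }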

For anyone working within such a domain, the pattern misses obvious recurring bugs or problems. This suspicious lack of problems leads to there being no wisdom but also no authority. This lack of insight can be one of our metrics for rejecting a pattern. We come up against this lack of conflict again later with the Command pattern in the reference section.

So, Functionality Ala Carte is not a fully formed pattern, and what little of it is a pattern might be an anti-pattern, as it is a self-serving pattern that pushes the deeper problem onto another element or owner.

Provably false

Another pattern from the first PLoPD book2 was in ‘A Generative Development-Process Pattern Language’. Pattern number 42 is Compensate Success. The pattern cites a problem of providing appropriate motivation for success. The context is an organisation with tight schedules and a high-payoff market. So, I would say that’s a typical software development environment. The following section on forces is interesting for how opinionated it is.

Schedule motivations tend to be self-fulfilling: a wide range of schedules may be perceived as equally applicable for a particular task. Schedules are therefore poor motivators in general. Altruism and egoless teams are quaint, Victorian notions. Companies often embark on make-or-break projects; such projects should be managed differently from others. Disparate individual rewards motivate those who receive them, but they may frustrate their peers.

— James Coplien, Pattern Languages of Program Design[PLoPD95], p. 234.

I agree with the first point, about schedules not being good motivators, but less so with the others. And these all still feel like context to me. They set the scene, but what’s the force? If we try to turn this into a pattern, it must resolve something unresolved. Perhaps the force is the desire of an organisation to get the best performance out of the individual developers. I can get behind this as a problem worth attempting to solve.

So, it’s still a problem of motivation, but at least we’re stating it clearly. Now, it’s become evident that the organisation has a different goal from the individual. Otherwise, why would the organisation think it needs to motivate the individual?

The second and last points are highly problematic, but there are references to support the claims, so perhaps I’m wrong? The first is Pay and Organisation Development[Lawler83], which is all about pay and incentive, but when you read the book, it is not about knowledge workers but about workers in general, in plants. In fact, on page 21, you see a graph showing the flow of motivation, effort, and performance in little boxes. This sits on the page opposite the text that considers how an individual mentally weighs up how much to perform.

Given a number of alternative levels of behaviour (ten, fifteen, or twenty units of production per hour, for example), an individual will choose the level of performance which has the greatest motivational force associated with it, as indicated by a combination of the relevant expectancies, outcomes, and values.

— Edward E. Lawler, Pay and Organisation Development, p. 20.

Quaint Victorian notion

When was the last time you considered performing? Have you ever considered putting in more or less effort? Based on reward or otherwise? Or, like most software developers, do you work as best you can because the real reward is completing the work? The intrinsic reward for being good at what you do, knowing you positively impact your community, and perhaps gaining a little self-improvement? The only externally provided reward I cherish is the opportunity to do things my way.

The pattern appears to be backing up its claim based on evidence gathered from factory and plant workers. Physical labourers might increase performance when you compensate them with additional pay, but I do not believe software developers are labourers.

The second reference is no better, citing:

Organisations offer rewards; individuals offer performance.

— Ralph H. Kilmann, Beyond the Quick Fix[Kilmann84], p. 229.

What makes this evidence even more inapplicable is how it steps on its own toes, later claiming that intrinsic rewards are valuable while extrinsic (employer-given) rewards seem to cause problems.

For example, if employees’ paychecks are out of line with what they believe they deserve, they become very upset. If employees feel that others are getting more pay for doing less work, they become angry.

— Ralph H. Kilmann, Beyond the Quick Fix, pp. 233–234.

Daniel Pink’s work in Drive[Drive11] and Herzberg’s two-factor theory confirm and extend the content in this part of the book and show that the pattern is not actually seen in the real world.

We can also see that extrinsic rewards do not make the world more wholesome by the observation that bonuses are natural Cobra Effect triggers. Motivating your high-performance individuals is not a job of cash or celebration but one of understanding what drives them and giving them more of it. Herzberg’s two-factor theory suggests money does not motivate; it only demotivates by absence.

Daniel Pink’s Drive suggests giving people more autonomy and more opportunities to grow their skills. The best way to release someone’s full potential is to give them more chances to meaningfully contribute to the success of their community. What’s more, Ralph H. Kilmann suggested these same actions!

At this point, I claim the present form of Compensate Success cannot be a pattern and might even be an anti-pattern. All the related patterns in the language now come under scrutiny because they seem suspicious by acquaintance. Code Ownership is an old idea that has had its time but does not work well. Size the Schedule looks too much like command-and-control and avoids early feedback. Solo Virtuoso smells like someone just wanted some autonomy. Other patterns similarly seem ill-conceived: Developer Controls Process, Fire Walls, Gatekeeper, Divide and Conquer, Decouple Stages.

Upon review, the pattern language fits a specific environment. I recognise it from first-hand experience. An ego-driven developer without appreciation for the complexity of a large, healthy organisation would feel comfortable with these patterns. I can see why this pattern language might have been published. Many of the reviewers likely felt at ease with the prescriptions. Most software developers likely conform to this description. This person could have been myself within the first decade of my career.

Diluted

With the patterns now diluted to a collection of mixed quality and credibility, the GoF book started to look like a source of consistency. The development world was in the middle of its object-oriented frenzy, with nearly every mainstream language being object-oriented or providing direct support for the paradigm. The solutions found in the GoF book applied to most developments because they were fundamental and paradigm-oriented rather than problem-centric. But with that came the inversion—the solution in search of a problem.

Developers early in their careers were tempted into believing there was a pattern for every concern. They would ask which design pattern they should use to solve their problem. In a world where all the design problems of programming are known or even knowable, this might have been a good practice for beginner programmers. They should not need to solve unsolved problems so early in their career. However, we do not live in such a world. Programming itself has not been around long enough to have most of its problems solved. Looking around at the state of software development, we’re learning about more things we haven’t yet solved3 faster than we’re solving the things we already know about.

With only 23 famous and credible patterns, there is unlikely to be a pattern that solves your problem. The question then posed becomes a mind-narrowing activity. The developer stops thinking about the situation and starts looking to see which of the 23 solutions fits it best. Thus, we end up with misused patterns, which deal a blow either to their credibility or to the developer’s capability, neither of which is desirable.

Design patterns, or at least the way they are portrayed, are also toxic, blocking ingenuity in a different way. The Sapir–Whorf hypothesis4 explains how the structure of the language we use and the idioms we live with help define what kinds of nuances we can communicate. As invention is often the combination of multiple ideas in our minds and making new connections, being unable to think about things as different from each other or from a new perspective becomes stifling. If you know a pattern matching a problem and the language overlaps your use case, you’re likely to assume the pattern is suitable and try to find a way to bend it into your problem space to implement a solution. If your language includes these patterns, there’s inertia against thinking about an alternative pattern that solves the problem more elegantly.

So, as an example, let’s talk about the Singleton pattern. Yes, I like to pick on this pattern, but that’s because it’s due some proper derision. As a reminder, the Singleton pattern is meant to help ensure a class can have no more than one instance and provide a global point of access to that instance. However, it also accidentally ends up doing the following:

  • It locks us to a single class (which hurts testing by making unit tests harder to isolate and mocks harder to inject).
  • It provides a creational behaviour, creating the object just in time,
    • which affects runtime performance because construction is not free, but when it happens, no one knows;
    • thus there is no decided order of construction, making initialisation less well defined, and some orderings may hide bugs; and
    • they are potentially never destroyed, meaning clean-slate tests can’t work, and we can’t be sure we’ve freed all our resources because we never naturally achieve complete shutdown.

Notice how solutions involving a Singleton might only need one or two aspects. Consider the idea of a Singleton used for holding access to a logging service. We want general access to logging, so we expose the LoggingServiceSingleton, which has methods for logging, setting the verbosity, and creating new channels for your messages. But why do we need a Singleton for this design? What are the features required? Global access? Yes. Only one instance? Not sure about that. What if a different system wanted to, for security reasons, have its own logger encrypted at the source? And what about the lifetime of the logger? Do we really want it to exist when it’s first used, or do we want to control when it is created so that we don’t start logging before we know we should or where we should be logging to? It’s beginning to feel like we want something different.
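As an illustration of that ‘something different’, here is a minimal C++ sketch with invented names (Logger, NetworkSystem): the logger becomes an ordinary object whose lifetime, count, and wiring are explicit decisions of the application rather than accidents of a Singleton. It is one possible shape under those assumptions, not a prescription.

    #include <iostream>
    #include <memory>
    #include <string>

    // An ordinary logger: nothing stops us from having one, two, or a
    // specially encrypted one, and construction order is ours to decide.
    class Logger {
    public:
        explicit Logger(std::string channel) : channel_(std::move(channel)) {}
        void log(const std::string& message) {
            std::cout << '[' << channel_ << "] " << message << '\n';
        }
    private:
        std::string channel_;
    };

    class NetworkSystem {
    public:
        explicit NetworkSystem(std::shared_ptr<Logger> log) : log_(std::move(log)) {}
        void connect() { log_->log("connecting"); }
    private:
        std::shared_ptr<Logger> log_;  // injected, so tests can supply their own
    };

    int main() {
        // Construction, sharing, and shutdown are now explicit and visible.
        auto app_log = std::make_shared<Logger>("app");
        auto secure_log = std::make_shared<Logger>("secure");  // a second instance is allowed

        NetworkSystem network(app_log);
        network.connect();
        secure_log->log("audit trail started");
    }   // both loggers are destroyed here, deterministically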

Another implementation of Singleton can be when a repository is responsible for an object, but there’s only one instance in the repository. This is a Singleton as there’s a global way to access it, and we have only one instance. When you think about it, how are any of the long-lived tables in a database different from a Singleton? Now, what about null? Isn’t that also a Singleton? We have global access to it, and there is only one null object. But how often have you heard anyone refer to null as a Singleton? How many times has anyone referred to using an SQL query to get at a Singleton? Language encourages thinking, but names can stifle it.

Okay, enough of the Singleton pattern. What other patterns hinder alternative thinking about solutions?

Mediator stands in for having strong idioms of messaging or a message bus. Composite gets in the way of thinking about structure as being external to the objects within it, so it promotes the idea of structure as something intrusive and renders membership in multiple structures unnatural. Interpreter begins to address this, but so few people understand it that Composite tends to become a lock-in model. Strategy stops you from thinking about currying and lambdas. Decorator starts to address currying but then breaks the model by suggesting wrapping more than one method at a time. And Flyweight, if you ever even think about it, stops you thinking about better data models than object-oriented ones.

One of my favourites to point out would be Command and Memento for stateful structure manipulation and undo operations. The pairing stops you from thinking about immutable state solutions, which can lead to interesting new opportunities. I cannot claim to have come up with the excellent solutions presented by Sean Parent, who showed5 how they used immutable state to provide history and undo procedures for Adobe® Photoshop®, to much applause at conferences where he talked about his concepts of Better Code. However, even though the technique he presented is powerful, it is only one more alternative to the design pattern technique of using Memento to store the state of objects before applying Command. There are other ways to write undo–redo systems, such as:

  • Save the state after every change to a different location. It’s dumb but works, so it should not be ignored (a sketch of this approach follows the list).
  • Make your commands fully reversible. That is, they are never destructive. This isn’t always possible (for example, when one command creates something and later commands refer to it, undoing and then redoing over such sequences can get complicated), but when it is, it’s a choice you could make.
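Here is a minimal C++ sketch of the first option, assuming a toy Document type: keep a history of whole-document snapshots and move a cursor along it. The point is only that undo and redo need no Command or Memento machinery at all.

    #include <cstddef>
    #include <string>
    #include <vector>

    struct Document { std::string text; };  // stand-in for whatever state matters

    // Snapshot history with a cursor: undo moves back, redo moves forward,
    // and committing a new state discards anything previously undone.
    class History {
    public:
        explicit History(Document initial) : snapshots_{std::move(initial)} {}

        void commit(Document next) {
            snapshots_.resize(current_ + 1);       // drop any redo tail
            snapshots_.push_back(std::move(next));
            ++current_;
        }

        void undo() { if (current_ > 0) --current_; }
        void redo() { if (current_ + 1 < snapshots_.size()) ++current_; }

        const Document& current() const { return snapshots_[current_]; }

    private:
        std::vector<Document> snapshots_;
        std::size_t current_ = 0;
    };

    int main() {
        History history(Document{"hello"});
        history.commit(Document{"hello, world"});
        history.undo();   // back to "hello"
        history.redo();   // forward to "hello, world" again
    }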

But here’s the problem: when people ask about a design pattern for undo, most are told how to use Command and Memento. However, the undo pattern doesn’t rely on those patterns at all. The undo pattern is simply about trusting that your actions are steps that can be undone and redone without fear of losing valuable data. There’s nothing inherently command-based about the steps. Nothing requires you to remember the way to extract some state from anywhere. The pattern merely asks you to ensure the current state of a document is recoverable after you take a step. In addition, if you undo the step, you have an opportunity to retake the same action in the same way. We want to redo the act with the same preferences, brush strokes, and key presses.

Consider also that when you only store the user’s actions, you begin to see how the tracked commands can become macros. Vim users are well aware of this way of building up operations. Each action is usually an elementary key press or sequence of key presses building up a much larger expressive language.
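A small sketch of that idea in C++, with purely illustrative names: store each edit as the action that performed it, and replaying the recorded list over another document is already a macro.

    #include <functional>
    #include <string>
    #include <vector>

    // An action is 'how the edit was done', not a diff of the result.
    using Action = std::function<void(std::string& document)>;

    struct Recorder {
        std::vector<Action> actions;

        void apply(std::string& document, Action action) {
            action(document);                      // perform the edit now...
            actions.push_back(std::move(action));  // ...and remember how it was done
        }

        void replay(std::string& document) const {
            for (const auto& action : actions) action(document);  // the macro
        }
    };

    int main() {
        Recorder recorder;
        std::string doc = "x";
        recorder.apply(doc, [](std::string& d) { d += "y"; });
        recorder.apply(doc, [](std::string& d) { d += "z"; });

        std::string other = "a";
        recorder.replay(other);   // "ayz": the same edits, reused as a macro
    }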

Missing the bigger picture

But worse than this, addressing the problem of an undo pattern by suggesting Command and Memento skips over aspects of the real problem. Missing from the combined pattern solution is how an undo pattern should include whatever matters to the user’s mental model and nothing else. What’s also missing is the modality of the undo operation.

Ignoring the user means ignoring which steps are considered steps and which are fleeting actions that do not change the document in a meaningful way. In such a system, macros may expand into actions, and each action would undo in turn, even though the user only took one action.

Consider modality, such as when an undo system keeps track of all commands, including selection and tab switching. In that system, it might become impossible to undo changes in one document while leaving modifications in a second unaffected. Imagine opening your favourite text editor and finding the undo buffer applied to all open documents, not just the one currently in focus. So, for an undo pattern, you need the undoing context, and that’s not covered at all by the pairing of Command and Memento.

If you hadn’t noticed, the latter anti-pattern is present in the Windows file system undo mechanism.

  1. Open two explorer windows.
  2. In the first, make a folder and name it.
  3. In the other, open a different folder and create a file there.
  4. In the first window now, undo.

Notice how it destroys the file rather than reverting the folder rename. Not what you expected?

Succession

Now, let’s talk about Chain of Responsibility. This pattern was superseded by how most web server routing software handles requests. The request–response pattern extended Chain of Responsibility considerably, with the intermediate state travelling up and down the chain. This pattern has been repeated in several places, but I didn’t find any reference to the Chain of Responsibility pattern in any documentation about it. I also did not see a reference to it as a pattern in books contemporary6 with the design.

It’s as if the pattern of request–response arrived, was assumed natural, and was ignored. It’s a useful pattern and should be understood so that it can be used outside the scope of web services. It’s a Chain of Responsibility with not only a hierarchical namespace as the key to deciding who should be responsible, but also intermediary states affecting the eventual response. Outside of web requests, the routing aspect of the pattern can be seen in some messaging libraries where subscribers can subscribe to path patterns.
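To show the shape rather than any particular framework’s API, here is a hedged C++ sketch with invented names (Router, Handler): handlers are registered against path prefixes, each may amend the shared response, and each decides whether the request continues along the chain.

    #include <functional>
    #include <iostream>
    #include <string>
    #include <utility>
    #include <vector>

    struct Request  { std::string path; };
    struct Response { int status = 404; std::string body; };

    // A handler returns true to let the request continue down the chain.
    using Handler = std::function<bool(const Request&, Response&)>;

    class Router {
    public:
        void use(std::string prefix, Handler handler) {
            handlers_.emplace_back(std::move(prefix), std::move(handler));
        }

        Response dispatch(const Request& request) const {
            Response response;
            for (const auto& [prefix, handler] : handlers_) {
                if (request.path.rfind(prefix, 0) == 0) {    // prefix match
                    if (!handler(request, response)) break;  // responsibility taken
                }
            }
            return response;
        }

    private:
        std::vector<std::pair<std::string, Handler>> handlers_;
    };

    int main() {
        Router router;
        router.use("/", [](const Request&, Response& res) {
            res.body += "[logged] ";  // intermediate state travels with the response
            return true;              // pass the request further along
        });
        router.use("/users", [](const Request&, Response& res) {
            res.status = 200;
            res.body += "user list";
            return false;             // stop: this handler took responsibility
        });

        std::cout << router.dispatch({"/users"}).body << '\n';  // "[logged] user list"
    }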

A very close relative of Chain of Responsibility is the bubbling and capturing event handling in web browsers. Bubbling maps almost perfectly to the pattern but adds an external manager responsible for the propagation. Capturing offers the opposite, where an outer object captures an event before it is passed down the hierarchy to its children. But again, I have not seen reference to the GoF pattern in any JavaScript or web development literature.

Almost true

The poison here is often not that the patterns themselves are harmful but that they almost fit so many problems or hide better patterns behind a famous almost-solution. Their solutions are applied not only when they don’t fit but also when they mostly fit yet something else would have been better: more flexible, less complex, and able to open up more and different opportunities.

Another poison comes from how some incorrect patterns have remained prominent and bring their world view with them. A good story is always true, but only in the world it brings with it. When we see patterns such as Compensate Success, we see examples of a fictional world where people work harder and do better when you pay them more. And we believe in that world because the story is good and makes sense. But repeatedly stating the same hopeful statements doesn’t make them true—it only makes them believable.

The Gang of Four wrote that their book offered a way to improve our ability to talk about the larger elements of our software by introducing terms to use as a language. They were partially correct. They introduced a language to communicate common ideas well and efficiently. But as with all things to do with efficiency, there was a negative impact on effectiveness. When we use the language they provide, we walk a narrow path. When we walk the path often enough, we start to believe it is the only valid route.

1

Yes, it’s spelled incorrectly. But that is how it’s published in the book.

2

I also found this pattern language in the Patterns Handbook[PH98]. It was reproduced in the 2004 book, Organizational Patterns of Agile Software Development[OPoASD04] with two stars of confidence (the Alexandrian notation for extreme confidence the pattern is always present to some extent in all wholesome solutions for the problem), and referenced again in 2019 in A Scrum Book[AScrumBook19].

3

As we develop new techniques, we see new problems we need to solve, such as screen-space reflection artefacts, handling audio in VR, developing an element of self-doubt in AI language models, and finding a non-destructive financing model for search engines and social media.

4

Not one publication but a series of hypotheses, developed over time by Franz Boas, Edward Sapir, and Benjamin Lee Whorf, into a principle of linguistic relativity. The way words are constructed out of pieces, and the rules of that construction, inhibit or afford ways of perceiving, thinking about, using, and extending both words and their objects. This subject is far too large for a footnote.

5

Better Code: Runtime Polymorphism - Sean Parent, from NDC {London} January 2017 https://www.youtube.com/watch?v=QGcVXgEVMJg

6

I did not spot it when reading Enterprise Integration Patterns[EIP04], but I could have missed it.

Unfortunate idealism

Christopher Alexander’s building process, ‘the timeless way’, relies on a specific set of motivations for building. It’s not clear whether Christopher Alexander was aware of this requirement in the early days of his career. On a number of occasions he butted up against others, seemingly without understanding their responses to his much more wholesome designs. As he developed his theory, his apparent understanding of their motivations grew, and he fought more actively against them. However, these motivations cannot be ignored or easily defeated within our current cultural situation.

The timeless way derives from practical, purposeful creation to satisfy a need. In some cases, that need can be a very well-defined sense of creation to inspire awe or depict dominance. In the times before the rentier classes became the controlling majority, the power to construct such edifices as cathedrals or castles was limited. They were traditionally constructed because familiarity with what they were and what they represented provided a significant part of their value. As the human population increased and the organisational structure adjusted towards personal ownership, the number of people able to command an army of builders increased. But, the reasons for their creation slipped away from ideological or physical security toward the vapid extreme of building a personal monument or leaving a signature of their power. Today’s castles are not armour or deterrents; they are elaborate garments.

The timeless way allowed a powerful entity to demand the best work of those aligned with its beliefs, to create without requiring detailed plans or a micro-managed command-and-control approach. Those who commissioned the building sought a group of like-minded souls. They would develop it together, agreeing it was the right thing to create. This alone is a massive difference from modern construction, where workers construct and designers design. Workers are no longer invested in the outcome or morally aligned with the development—the natural result of working primarily for money, not pride, joy, or personal needs. You can see the difference when comparing these projects against self-builds or home improvement work.

Christopher Alexander needed people to be honest and not mean. Those without these qualities, and unwilling to try to understand, ruin the system. In the past, it was easier because there was little value in being dishonest about your work. Your reputation was crucial. Nowadays, clients spend less time deliberating over a builder’s reputation, regularly ranking purely based on how low they bid on a contract. A builder who can cut costs on a bid will likely win it. If they can cut costs on materials and labour while building without introducing the potential for litigation at a later date (or even avoid it in the short term), then there is a strong incentive to do so. All this stems from how we reward and punish. There can still be pride in work, but people are often punished for doing the right thing and rewarded for only doing what is necessary to get the job done.

You see this effect everywhere, not just with physical construction. It’s not only engineers who are incentivised to cut costs and move on to the next contract; many other sectors are rewarded that way too. You only have to consider the food and catering industry to see a brilliant example of the power of profit-driven goals by how we have come to live in a contradiction where cheap fast food and expensive coffee can live side by side and target the same consumer.

When we fight against these negative feedback mechanisms, we can do amazing things. I’ve heard the success of Heathrow’s Terminal 5 was down to such an instance where the project was led well, and most importantly, the contracts were written and updated when necessary to better reward those who worked as a team. This eliminated the temptation to only do what was required. The huge project had some failures, but the construction was completed on time—an astounding feat given how inevitable overruns seem to be with large infrastructure projects.

In short, the timeless way of building only works when you want the best for the people who will use the building. It’s successful when the developer is interested in the outcome for its inhabitants. It works for projects to make your own home or when an honourable person of great integrity sits in a position of power during construction. But when there is a disconnect, the Alexandrian way of building is constantly stressed and under attack by the forces of fiscal or ego motivation.

Unresolved forces

Design patterns have unresolved forces. I’m no longer talking about individual patterns but the whole concept. There are unresolved forces of wanting to share them with a larger audience, and those forces are currently deadlocked against the forces of ego and image-driven processes. Getting the larger picture of design patterns out to everyone only benefits everyone.

The ego-driven effect on design patterns left us with numerous people proving they understood them but scant access to that knowledge. We must resolve the force of freeing knowledge and let the patterns out. Give people what they need to recognise them and understand enough of them to appreciate what they offer. But the force of fame for the individual and making money from it stands in the way.

There’s an unresolved desire to share design patterns with the rest of the community. The ideals of those who work in software architecture are plain to see. Academics or professionals who like to work in software see value in it. They want people to know what’s available and use the knowledge that has been gathered over the years. Still, there remains a constant need to gain recognition or money from it, which causes problems, such as being unable to get hold of worthwhile books.

The same force of making money from a project, or the force of ego or mark-making, is at the centre of some of the recognisably bad architecture exhibited in The Nature of Order, Book 1[NoO1-01]. The buildings that stand out as inhuman, or at least not built for human living, are those where the architect’s ego was more important than the life of the building’s occupier or where the ability to sell the design was more important than the inhabitants’ comfort.

This is why I say the design pattern movement has unresolved forces, hence the book’s title. When reading Christopher Alexander’s works, we note that the phrase ‘to balance or resolve conflicts’ often appears. From a systems point of view, it means finding some potential difference, like a charged battery or, better yet, like static electric build-up. It means we recognise where the system is stressed and where to begin to relieve it. You can think of it as a release of tension.

Sometimes, resolving forces can mean tidying up; you remove the clutter to think about the important stuff. But it can also be recognising discord and how fixing it is the most urgent act.

The forces are the as-yet unsatisfied demands. The forces are what will tear a thing apart if left unchecked. They will make the whole incomplete. They are the gaps and the lack. They are also the workarounds that make life exhausting. They are the inefficiencies, the security holes, and anything keeping you up at night, your worries and unfinished business. They are the source of anxiety and why people can’t settle or never feel comfortable or confident that things are fine.

To resolve the design patterns’ forces, we must recognise them. First, there is the force of awareness of what design patterns are. They are for everyone, not just object-oriented programmers, even though the most famous book in computer science speaks specifically about them. Design patterns are better when formed as languages solving a greater problem. Second, we must recognise the force of availability. We must find a way to make design patterns usable for everyone.

We live in a dark void where there are no clear and praised examples of good software

What is your go-to example of good code? Do you think of the C++ standard template library? Do you think about the virtually bug-free code in TeX, the typesetting system by Donald Knuth? Do you think about the Apollo 11 code, that is, the code that took astronauts to the moon?

When we finally come across examples of good software, we judge them harshly. We denigrate some for not being clever enough. Others we judge as not being realistic situations, and we classify them as toy examples. We discard some as merely simple, as if simplicity wasn’t something we strive for every day. We forget all the effort behind our best successes in the grind to reduce complexity. We forget to ask how it got to be so simple. We fear complexity and yet wear it as a badge of honour.

The same happened in architecture when modern architects started to look down on constructions that were objectively good for the occupants. They began to care more about the impression they gave and less about how well they served those who were to dwell inside. We slowly pushed aside good examples of houses and offices in favour of novelty and expression.

The remaining bastion of purpose-validated construction seems to be agricultural and industrial buildings. We make them safe and productive, aiming for minimal upfront investment and low maintenance costs. They are not traditionally beautiful due to their need to produce goods being prioritised over the comfort of their workers. Still, even the most mundane industrial plant has a sense of honesty and wholeness that is missing from so many of modern architecture’s multifarious creations.

Many years from now, I hope future generations of programmers can look back on old code and pick the good from the bad. I want them to have the wisdom to tell what makes a good codebase good. I hope they discern better than those who disdain simple, uncomplicated solutions offering instant readability in favour of clever layers of flexibility. The situation we presently find ourselves in feels like the school of modern architecture: we expect enterprise-level solutions at every level, and anything less is looked upon with suspicion.

One day, I hope those building up their complexity will see how the rule of scale shows they were building too large a language in one layer. I hope they find a way to extract themselves from the treadmill of fighting back against inherent complexity caused by putting all the different levels of abstraction in the same layer of their project.

Until that day, we are limited to writing good code despite these abnormal ideals. We can push back and claim we’re not smart enough to review their code to avoid the creeping complexity and unnecessary future-proofing. Or just point people to TeX.

How Do We Fix Patterns?

We could first ask why we want a successful pattern movement. What’s wrong with how we’re doing things now? For me, the answer lies in the potential patterns had and still have. The ability to act with experience-free wisdom. The time saved in developing inferior solutions from first principles to problems that have occurred many times before. The benefit of taming complexity using the unfolding process rather than the accretion of disparate parts. And the formalisation of a new way of capturing reusable knowledge.

When I began writing this book, I thought the movement might resemble Stack Overflow—a place of questions about problems with solutions. I also thought it could be another incarnation of the Portland Pattern Repository—a specialised Wikipedia. But in the end, these knowledge stores are suited to different types of information.

Knowledge distribution

The Divio documentation system1 is a superb way to document important information about your product. The system organises your documentation around two axes. One is the practical–theoretical axis; the other is the studying–working axis. Tutorials and explanations are study-related, whereas how-to guides and reference manuals are for when you are working. Tutorials and how-to guides are practical, whereas reference manuals and explanations are more abstract or theoretical. When I studied this system and understood these quadrants, I realised I could classify them in another way. We can organise the quadrants by awareness and ignorance of different parts of the problem and solutions.

  • Reference material is for when you know the solution but not the details.
  • How-to guides are for when you know the problem but need help finding a solution.
  • Tutorials are for when you know you have a problem but need help understanding the context and refining your problem.
  • Explanations are for when you might not even know you have a problem or want to understand what other kinds of problems you will have with any system attempting to solve something similar.

This last one sings of being a pattern language to me. Tutorials are example solutions in the pattern language, taking on a few patterns at a time. The how-to guides are supporting material for emergent sub-problems of the space, and reference material is supporting information for the solving process at the detail level.

Given this different understanding of documentation, Stack Overflow is a place to go when you know you have a problem. You’re not problem ignorant; quite the opposite. It’s a place for how-to guides. It can be useful when you don’t know how to couch your question well enough for a direct search result to the correct reference material or when the reference material is too difficult to digest.

Wikipedia is for when you know what information you are lacking. You know what solution you have in mind, and you want reference material. This is perhaps why we drifted away from the WikiWikiWeb for patterns and ended up with books and conferences keeping it alive in its current form. These forms provide enough support to stop patterns from dissolving completely, but could a new pattern movement emerge from them? How do books and conferences fit into the problem-ignorance space?

Passive learning

Books can work when they are recommended by people who know about the problem domain, even if they aren’t well-versed in the patterns. Recommendations are a good way for those unaware of their problems to access the wisdom they need.

Because books are collections of related solutions to a whole suite of problems for a domain, they can become fountains of knowledge. Someone will ask, ‘What’s a good book on how to build a shop app for my business?’ and there will be a book on it. But it will also include other wisdom revealing unforeseen problems for the inquirer, possibly more than the recommender knew. Even if it’s not a patterns book, it will probably be a tutorial joined with a how-to guide. It will contain a sequence of steps to get the reader going. So, books work for people who are unaware they have problems, so long as there’s someone to recommend them or the domain is discoverable.

Conferences are different. They work for people who accept there are problems they are not yet aware of. The humble. Those who are accepting of their ignorance. They work reasonably well, but only as long as people attend them. Attendees learn new patterns for problems to which they were formerly oblivious. But why attend? Can they afford to attend? The PLoP23 conference was nearly $1,000 to attend. The EuroPLoP conference cost over €1,000. Perhaps you think this is reasonable, but maybe smaller, local events could help make attending more affordable.

For those who can’t attend, there are often videos available online after the event. However, conference videos lack the thrust of being there, so the less appealing tracks remain entirely unattended. I don’t think we can shape a new pattern movement around these.

I don’t believe a wiki will work for a future movement. The WikiWikiWeb is an artefact of an earlier time. Because patterns are about problems you might not know you have, there is no starting place from within the wiki to arrive at your solution. Signposts don’t work when you don’t know where you’re going. Why do you look up a pattern? It’s because you already know there’s a pattern to help solve a problem you’re aware of. Or you need to remember how a pattern is meant to play out and want to refresh your memory. Neither of those reasons find footing in new minds encountering new problems.

Whatever the movement looks like, it must proactively engage with individuals who are not yet aware they have a problem. So, it must be problem-centred. It could be anecdotal about situations recognised and wisdom gained. The languages and patterns may be circumstances you find yourself in and the things that guided the final resolution. Stories, paths, wisdom, and talks.

If it is successful, the next pattern language movement will likely use patterns of audience engagement to raise awareness. It will look like it’s invading an existing presentation structure, but it will be silent and go unnoticed. It will probably not use the name ‘design patterns’.

It will be about pattern languages, making it about a domain of interest rather than a tool people want to use. It will be about the buildings, not the bricks. It will be about specific classes of applications or systems, not programming languages or paradigms. It will be about making better potters and programmers, not pots and programs.

Whatever it is, it should not be another book.

Updates

For those who want an up-to-date wiki, we can fix this by building a freely accessible library of pattern languages. Not just patterns, and not just the basic ones we know of, but also the hard-to-find languages and patterns and their history. We should link to sources where we can. We need to show how pattern languages have changed over time. Trends may emerge, but history is important because those aware of old patterns must learn whether they were superseded.

If a pattern exists somewhere, then it should be possible for anyone to find and read it. We discover patterns, so, by definition, their form and content cannot be patented. Reiterating what they are, how to use them, their contexts, and their forces cannot infringe on anyone’s rights, as they did not invent them. Only the specific text of a description can be copyrighted.

If a library is built, we already know the overall structure. Languages are structured as a web—a list of patterns with links between them. The whole library only needs to have a way of storing a new language when one comes to light.

But the library will not be the new movement. It won’t fix patterns, but it will stop them from being so inaccessible. It might help prevent people from trying to name them by removing the ego from the discovery and naming process. And it could inspire the creation of new languages. However, passive knowledge does not drive change.

More negative examples

A new library may help refine the form of patterns in the face of the much-needed conversion to a problem-centric record of wisdom. But we must fight the problem of abundance. We can do this by agreeing on a formal way to decide whether a pattern is new and valuable and only publish as patterns those that pass the test. However, we should still record every pattern submitted. A library is not only an excellent place to store the truth but also the right place to keep negative examples. We can use them to help others understand when they have veered off course.

Diagrams

Physical building patterns used diagrams because they related to the geometry of construction, so we deemed them necessary for the software design patterns. But it was a mistake, a misunderstanding of what diagrams were for in A Pattern Language[APL77]. Maybe no one deeply understood the reason for diagrams and so couldn’t find an equivalent representation for code because they didn’t know what they were looking for. This is almost certainly true, as Christopher Alexander and his team had yet to discover the limitations of the diagramming provided in A Pattern Language[APL77].

The diagrams in A Pattern Language[APL77] most often showed the final form. Even in places where the process was vital, there was no diagram of the transformation. If a pattern is meant to be a process and a thing, these diagrams only show half the picture.

The diagrams also lacked negative examples, which would have reduced the oddness of some of the designs produced using the architectural patterns of house building. We should not replicate this mistake of hiding the process and revealing only the final result. We must protect readers from misunderstanding by presenting negative cases; otherwise, they will end up learning to dance like children copying adults.

Proof by contradiction

Zendo is a board game involving plastic pyramids and sleuthing. It can show you the power of negative examples. In the game, you try to deduce a rule; in the same way as a pattern reader, you try to internalise the presented wisdom.

You start with the lead player showing you a pair of examples related to their selected rule. One is a positive example, the other a negative. The rule can be entirely made up and can include things such as:

  • There must be two red pieces
  • There must be a pyramid on its side
  • All pieces must be on the table
  • A piece must be touching another piece

They can have negative rules such as:

  • There must not be any blue pieces
  • No wedge shapes allowed
  • No small on top of large shapes
  • Pieces must not touch

The game involves players constructing their own examples and asking the lead player whether their creation matches the rule. It doesn’t take many turns for people to figure out even the most complicated rules, but the structures built to test negatively provide the best clues to the rule. The aim of the game, after all, isn’t to create designs that follow the rule but instead to understand the rule itself.

When played with cards, the Zendo variant shares a lot with Cluedo™, but the variant where the lead player makes up the rule expands the possibility space beyond a simple game of deduction. You must create experiments to test your hypotheses. This deeper, more free-form game shares an important trait with design pattern descriptions and diagrams: uncertainty about where the boundaries of the example solutions lie. And both games share this trait of the unknown with the game called Petals Around the Rose.

Knowing what a street should look like doesn’t help you as much as knowing what a street should not look like. Knowing about counterexamples of building design is very helpful. Just seeing a thick wooden step to stand on, you might believe the step needs to be made of wood but not realise the thickness matters. You might not be aware of the importance of the depth and rise of the steps either. Counterexamples highlight things you need to worry about. Design patterns, being wisdom, require counterexamples to present this most crucial aspect. This negative aspect is missing from almost every pattern ever published.

The library must include negative examples to help guide to a better set of languages and patterns. So, it must include the original Singleton pattern.

About change and avoidance

Geometry and diagrams are incredibly helpful in building and architecture. Pictures show form, and we have an innate ability to see whether a design looks right from at least one perspective. This is not just because we have seen so many examples before but because we recognise good shapes inherently. The biological sense of beauty.

However, diagrams of the real world are only helpful because they simulate the world we inhabit. The diagrams of code don’t provide the same level of recognisable goodness of fit because they simulate an abstract world.

A diagram of the code’s geometry must represent the code’s goodness. Diagrams can give us a sense of awareness of failure points and unhealthy situations, but most don’t show us the relevance of the parts in any appreciable way. One way would be to diagram modules or classes and colour their API so you can see where they are used. Another would be to draw every usage as a line and look for messes or places that offer clean cuts.
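
To make the second idea concrete, here is a minimal sketch in Python that turns a list of (caller, callee) usages into a Graphviz DOT graph you can render; the module names and the usages list are invented for illustration.

Sketch: usage lines as a DOT graph (names are hypothetical)
# Each pair records one usage line: which module calls which API member.
usages = [
    ("Billing", "Library.GetBookList"),
    ("Reports", "Library.GetBookList"),
    ("Reports", "Library.GetNotes"),
    ("Shelving", "Library.GetBooksByAuthor"),
]

def to_dot(usages):
    lines = ["digraph usage {"]
    for caller, callee in usages:
        lines.append(f'  "{caller}" -> "{callee}";')
    lines.append("}")
    return "\n".join(lines)

print(to_dot(usages))  # render the output with Graphviz to see the lines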

Interaction lines

With this diagram, you can see the interactions between the elements and where aspects of the API currently overlap. Where there is overlap, we see complexity. Where there is not, there is the opportunity for a further split. We can see where components can be easily cleaved from the collection and those that rely on almost everything.

But this only captures the system as it is. This diagram, even if developed further, does not show how a system changes. Therefore, it only reveals the state before or after a step, not the complete dynamic picture.

Diagramming code

The Interpreter pattern is complicated, and the lack of diagrams showing its true intent makes it hard to digest even on a third reading. But there are also good examples in the GoF book[GoF94]. If you look at the Composite pattern’s object diagram, you can clearly see the intention to allow the creation of hierarchical structures. The UML1 diagram showing the recursive link is there too, but it does not clarify the intent.

UML diagrams were good, but trying to fit all programming paradigms into them is a fool’s errand. They were a natural choice for the GoF book, but people outside of object-oriented design don’t understand them, and they add friction. They also imply object modelling, which adds to the concern of all famous design patterns being object-oriented. From this evidence, many infer that object-oriented design is the only way to design software.

The UML diagrams most often seen are class diagrams, but most developers need visualised usage and instances. Seeing how things are meant to look when realised can allow for greater insights. One diagram shows how things will be at runtime, the other only shows a picture mirroring the code. In my opinion, a diagram should only show what you cannot represent more clearly in some other way.

If you have a copy of the book, look again at the presentation of the Composite. The first diagram is a UML class diagram. In it, we see the code relationships. This might be useful for developers wanting to understand how to implement the solution, but it’s not so good at conveying the value of the pattern. It’s a diagram of the final code.

The second diagram, however, is a UML object diagram. There are duplicated elements because, in the working case, there will be multiple objects of the same class. This diagram helps us understand what will be happening at runtime. This diagram imparts the value of the pattern even if it does not show how the pattern can be implemented. It also immediately presents a counterexample to one misinterpretation. We have a concrete example showing it’s okay for a Picture object to contain another Picture object, whereas the class diagram only implicitly allows it. In other words, this type of diagram shows something that would otherwise be ambiguous.
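
To make the same point without UML, here is a minimal sketch, with hypothetical Picture and Line classes, of the runtime structure the object diagram conveys: a Picture legitimately containing another Picture.

Sketch: the Composite at runtime (classes are hypothetical)
class Line:
  def Draw(self):
    print("line")

class Picture:
  def __init__(self):
    self.children = []

  def Add(self, child):
    self.children.append(child)

  def Draw(self):
    for child in self.children:
      child.Draw()

outer = Picture()
inner = Picture()   # a Picture inside a Picture is the whole point
inner.Add(Line())
outer.Add(inner)
outer.Add(Line())
outer.Draw()        # draws the entire hierarchy with one call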

Note the difference between the way the diagrams map neatly to the world of the developer and the world of the user. One is the state of the source code, the other is the state of the running program.

I was delighted to see that Pattern-Oriented Software Architecture Volume 5[POSA5-07] omitted UML diagrams and even mentioned how they might not have been the best choice when discussing how to define design patterns. Because even an object diagram is an image of a static final form, it suffers the same problems as the diagrams from A Pattern Language. But also, the UML does not benefit from the centuries of examples and counterexamples we find in our lived environment. For the GoF book to be better understood, its authors would have needed to compensate for that missing history with many counterexamples for their patterns.

Draw the rest of the owl

Diagrams for code patterns must be two things. First, they need to be examples of the right thing. Second, they have to be concrete examples of the process being applied to a context. Further diagrams must show where and why the pattern should not be used. We will need a lot of examples for this if we take each aspect in turn. But it’s worth thinking again about how we traditionally transfer wisdom between people and between generations. We pass on our knowledge through stories.

Anecdotes stick. The mystery of the moment and the excitement of learning the solution to the sticky situation is always a thrill. For code, the roller-coaster ride of problem and error to technical debt to the final sweet and robust solution should be no less enticing to the invested developer. But we need more woe. More setbacks. More failures to learn from. Stories of heroes always succeeding are boring. There must be a challenge.

Telling stories about patterns will work. Stories are always about change and adaptation to newly gained boons such as knowledge or power. Design patterns are definitely stories about freshly acquired knowledge, but they need the middle act where the antagonist throws their army of ‘gotchas’ at the developer, trying to wear them down with new situations they need to handle. We need conflict in the pattern story.

The existing design patterns are not stories. There is no conflict, and we haven’t structured them as unfolding sequences. They are more like news items than tales with meaning.

Sometimes the thing we want to show has a visual element, such as in UI patterns. In those cases, imagery and geometry work amazingly well. But we should keep the story aspect. We even have a pre-packaged solution for telling stories in image form. We can make comics for UX patterns. We might not be able to make a comic for the Strategy pattern, but we can for the mile-wide-button UX pattern. Choose the correct type of diagram for your domain. For code, we need structure diagrams and before and after code snippets.

Pattern diagrams currently show the final form, not even the process in many places, but we need the process to understand how codebases must change. A codebase is not built in one shot, so any technique or pattern should show the process of introducing the element as a change in a context. We need to show our intentional avoidance of certain forms through negative examples. When teaching wisdom, we must show what is good, but the bad is much more compelling. Pitfalls will always be more valuable than tricks, and ‘gotchas’ are more useful than advice on what and how to perform.

1. The unified modelling language is an attempt to create a visual language with which to model object-oriented code structure and dynamics. It was used throughout the GoF book to present the structures of design patterns.

Resolving the discoverability problem

For the more idiomatic patterns, those of programming language or paradigm, discovery cannot be fixed by organising by problem. None of these patterns are strongly aligned with an overarching theme or domain of conflict. People don’t often encounter them when they have a need—an applicable problem. For these kinds of not-quite-patterns, we need a different solution to make them visible.

A preferred approach would be to automate idiomatic adherence. The Clang Static Analyzer and clang-tidy, Rust’s Clippy, Python’s Pylint and pyflakes, and ESLint for JavaScript are all automated tools that help identify where something doesn’t seem right. They capture errors or at least identify where meaning is ambiguous. Other tools, the style enforcers such as Rust’s rustfmt, Python’s Black, and Clang’s clang-format, make code readable for all by enforcing a standard way to write it.
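
As a small illustration of the kind of idiom these tools surface, Pylint warns about mutable default arguments, a well-known Python trap; the function below is invented for the example.

Sketch: an idiom a linter flags, and the usual fix
def add_note_bad(note, notes=[]):    # flagged: mutable default argument
  notes.append(note)                 # the same list is shared between calls
  return notes

def add_note(note, notes=None):      # the idiomatic form
  if notes is None:
    notes = []
  notes.append(note)
  return notes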

An idiom section of the pattern library provides a place to discover why these tools raise a proverbial eyebrow. However, we still need to improve the usability of the library itself.

Externalising a brain

We act when we sense. We sense when we can discern. We discern when we are attuned to the typical parameters of the elements with which we interact. Only then can we experience change and contrast. But if we don’t know or have not experienced, we will not have the presence of mind to think about some aspects during our interactions. We won’t have the ready knowledge to act on input, because without sensing, there is no input.

Our goal cannot be to teach everyone every pattern. This knowledge and wisdom must be kept outside the minds of pattern users but held in such a way that it is accessible just in time. But do we have any examples where this has been possible? Yes! We have many.

A dictionary is the externalisation of a brain filled with answers to the question of what a word means. A thesaurus is the externalisation of knowing the categories of words and the selection of expressions you can regurgitate in pursuit of persuasion. Indeed, even how-to guides are an externalisation of competence. Recipe books and repair manuals alike store information irrelevant outside of specific moments of need. Even an organisational chart is an externalisation of intra-organisational networks, so you know everyone you might work with.

A library

We must reference patterns in one of these different ways. I prefer a Thesaurus-style approach with a section for discovering what categories the pattern might be in and a second section for learning what patterns are in each category.

Categories may be related forces or common contexts. I do not have a solution for the challenge of structuring them, so even this step is tricky. Perhaps you’d hunt for contexts or forces by pattern name and then hunt for pattern names by context or forces. That is how Roget’s Thesaurus is constructed: two indexes built over the same space, each an inversion of the other.
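
A minimal sketch of that two-way structure, using invented category and pattern names, could be as simple as a pair of dictionaries, each the inversion of the other.

Sketch: a thesaurus-style two-way index (names are hypothetical)
patterns_by_category = {
    "decoupling data stores": ["Specification", "Repository"],
    "querying a collection": ["Specification", "Interpreter"],
}

# Invert the first index to build the second.
categories_by_pattern = {}
for category, patterns in patterns_by_category.items():
    for pattern in patterns:
        categories_by_pattern.setdefault(pattern, []).append(category)

print(categories_by_pattern["Specification"])
# ['decoupling data stores', 'querying a collection']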

The other benefit of even attempting this would be the possible discovery of truer categories of patterns. These categories could be the starting point for discovering the fundamental properties of software patterns.

If we resolve the discovery issue, it will be worth building a repository of all patterns. At present, because they are most often searched for by name, just listing them all makes no sense. For proof of this, the Patterns Almanac[Almanac00] makes it clear that a simple book of patterns is not a valuable desk reference. No, an efficacious patterns repository must have an extremely clear contextual lookup system. It must have obvious categories that you can peruse for solutions. You must not need to know the name to find a pattern for your problem.

An example

In this section, I attempt to create a pattern story, showing how it might look when following some of my advice. Diagrams show both a before and after state. Look at how it defines the problem as an anecdotal conflict and suggests a path towards a solution. It’s not perfect, but hopefully, this provides a concrete alternative to some of the more theoretical work covered thus far.

The pattern is part Specification (from Domain-Driven Design[DDD04]) and part Interpreter from the GoF book[GoF94]. The problem faced is strongly inspired by the section ‘Replace Implicit Language with Interpreter’ from chapter 8 of Refactoring to Patterns[RtP04]; although the initial problem, the programming language, the specifics, and the solution are all different, the refactoring journey is quite similar.

CAVEAT: This next part is probably not the best way to explain a pattern, but this story-like flow may be more comprehensible because it contains elements of the pain of the problem and shows the steps.

1. The problem – My API has a lot of data-coupling getter methods.

When your software grows, sometimes it gains some non-complex but large APIs. These are shallow, simple APIs with many methods, each used by only one or two other objects. This usually happens when one subsystem has grown to solve the problems of many other systems and has taken responsibility for them, even though they are not best suited to solve all the problems.

When is such a system not best suited? When the system knows less about the domain of the solution than the caller.

Software in this state tends to have many methods with overlapping names. Many small specific public methods massively outnumber the private implementation. Many of the individual methods of the public API will not be used by more than one external entity. Some methods require otherwise unrelated data to be managed by the module.

Each external entity has a thing they care about, their domain, and the central module provides a specialised port to connect to. In addition, the central object often owns the data, but its primary responsibility is handling requests, not data transformation itself; it lacks domain knowledge.

We’ll consider an application to help budding authors find information on books they have read, or should have read, and help them create good bibliographies and notes.

1.1 My problem

Setup Diagram

My application, Library, was an opaque object with lots of methods. It relied on two other classes, Book and Note. The support classes were trivial data objects.

In class Book
  def GetAuthors(self):
    return self.author

  def GetTitle(self):
    return self.title

  def GetPublicationDate(self):
    return self.date

One method on the main Library object was to fetch the notes for a book, as I thought storing the notes directly with the bibliography information was a bad idea.

In class Library
  def GetNotes(self, book):
    if isinstance(book, Book):  # if given a Book, use its reference_id
      book = book.reference_id
    return [note
        for note in self.notes
        if note.reference_id == book]

There was a trivial method for getting the list of all the books.

In class Library
  def GetBookList(self):
    return self.books

However, the whole list was quite large, so it was not easy to work with directly. Instead of using that method alone, I wrote new methods to fetch by different criteria.

In class Library
  def GetBooksByAuthor(self, author_name):
    return [book for book in self.books
            if author_name in book.GetAuthors()]

  def GetBooksInDateRange(self, start_date, end_date):
    return [book for book in self.books
            if start_date <= book.GetPublicationDate() <= end_date]

  def GetBooksMatchingTitle(self, match):
    return [book for book in self.books
            if match in book.GetTitle()]

And when I say the book list was very large, I mean it. Even filtering down this far was not enough in some cases. To help, I added some even more specific fetching functions.

In class Library
  def GetBooksMatchingSubjectWithNotes(self, match):
    return [book for book in self.books
            if match.lower() in book.GetTitle().lower()
            and len(self.GetNotes(book))]

  def GetBooksByAuthorInDateRange(self, author_name, start_date, end_date):
    return [book for book in self.books
            if author_name in book.GetAuthors()
            and start_date < book.GetPublicationDate() < end_date]

  def GetBooksMatchingSubjectButNotByAuthor(self, subject, author_name):
    return [book for book in self.books
            if author_name not in book.GetAuthors()
            and subject.lower() in book.GetTitle().lower()]

  def GetBooksInDecadeSortedByRef(self, decade_start):
    return sorted([book for book in self.books
                   if decade_start <= book.GetPublicationDate()
                   and book.GetPublicationDate() < decade_start + 10],
                  key=lambda x: x.reference_id)

After a while, I realised that some duplicated code had led to some bugs, and adding new functions wasn’t getting easier to get right. I would copy-paste the closest method and make some changes. This is not the cleanest of coding practices and is obviously prone to copy-paste errors.

2. The forces – I want more but need fewer.

It all came to a head when I hit two opposing forces:

  1. I wanted to add even more queries, but it seemed silly to keep adding them this way.
  2. I needed to remove all the queries to do with notes.

I needed to add a method that would select the books referenced in my new work, Programming Design-Patterns for Job Security. I wanted to add something like GetBooksWithNotesIncludedInWork, but that would couple the bibliography software to my notes objects even more.

I thought I could make something that grabbed all the notes, checked they were included in my new book, and then use that filtered list in a new function called GetBooksWithNotesInThisList(self, note_list). That seemed like an almost workable but awful plan.

But then I hit a real problem. Someone I worked with wanted to use part of my software. They needed a bunch of different queries for their books, such as GetBooksWithHighPageCount and GetBooksWithDimensions(Width, Height), as they were trying to write some software that automatically found a nice way to stack their bookshelf while also maintaining author name ordering where possible.

My bibliography didn’t have the dimension or page count data, and adding it seemed wrong. I didn’t need those functions, and they would have just cluttered my beautiful API! So, I wanted to make it such that they could add their own data about the books, in the same way I added notes, but I also had to make it in a way that I didn’t need to share my note data when sharing the bibliography data, as I didn’t want them reading my notes.

Refactoring hygiene

Before refactoring, you should always have tests to prove your actions haven’t broken anything. I wrote a few use cases to generate output data to confirm things were working. I used approval tests to verify each refactoring step by comparing text output.

Use case of the old system
  library = Library()
  print("Books with notes, on the subject of programming")
  print_books(library, 
              library.GetBooksMatchingSubjectWithNotes(
                  "program"))
  print("Books by Takashi Iba")
  print_books(library, library.GetBooksByAuthor("Takashi Iba"))
  print("Early books by Christopher Alexander")
  print_books(library, library.GetBooksByAuthorInDateRange(
      "Christopher Alexander", 0, 2000))
  print("Books by others on architecture")
  print_books(library, library.GetBooksMatchingSubjectButNotByAuthor(
      "architecture", "Christopher Alexander"))
  print("Books from the 80s, sorted by RefID")
  print_books(library, library.GetBooksInDecadeSortedByRef(1980))

This approach might not work for your case, but you will need something that tests at a more abstract level than a typical unit test, such as a behaviour test, because this refactoring changes the API and the participating components.
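
For those unfamiliar with the technique, here is a minimal sketch of the approval-style check I mean; the run_use_cases() function and the approved file name are hypothetical stand-ins.

Sketch: an approval-style check (helper names are hypothetical)
from pathlib import Path

def verify(received_text, approved_path="use_cases.approved.txt"):
  approved = Path(approved_path)
  if not approved.exists():
    approved.write_text(received_text)   # first run: inspect, then approve
    return
  assert received_text == approved.read_text(), "Output changed during refactoring"

# verify(run_use_cases())  # run after every refactoring step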

3. The process

My problem module was a monolith with many methods. Internally, the module talked to a datastore containing data that was not strongly coupled but shared the same backing store and access point.

Starting Point Diagram

Any solution would include some way to GetBooksBySomeKindOfQuery(Query). I wanted to decouple all this, so I surveyed the problem. As a first step, I realised all the queries already operated on data I could get via the public API of the Book object. So, I started by extracting each function out into public free functions.

Step 1: Extract to free functions

Result of extracting methods into free functions.
def GetBooksMatchingSubjectWithNotes(library, match):
  return [book for book in library.GetBookList()
      if match.lower() in book.GetTitle().lower()
      and len(library.GetNotes(book))]

def GetBooksByAuthorInDateRange(library, author_name, start_date, end_date):
  return [book for book in library.GetBookList()
      if author_name in book.GetAuthors()
      and start_date < book.GetPublicationDate() < end_date]

def GetBooksMatchingSubjectButNotByAuthor(library, subject, author_name):
  return [book for book in library.GetBookList()
      if author_name not in book.GetAuthors()
      and subject.lower() in book.GetTitle().lower()]

def GetBooksInDecadeSortedByRef(library, decade_start):
    return sorted([book for book in library.GetBookList()
                   if decade_start <= book.GetPublicationDate()
                   and book.GetPublicationDate() < decade_start + 10],
                  key=lambda x: x.reference_id)

The usage of the methods changed, but only a little. Mostly, as is usual when you migrate to a free function, the object slides into the first argument.

Use case of the new free function version.
  library = Library()
  print("Books with notes, on the subject of programming")
  print_books(library,
              GetBooksMatchingSubjectWithNotes(
                  library, "program"))
  print("Books by others on architecture")
  print_books(library, GetBooksMatchingSubjectButNotByAuthor(
      library, "architecture", "Christopher Alexander"))
  print("Books from the 80s, sorted by RefID")
  print_books(library,
              GetBooksInDecadeSortedByRef(
                  library,
                  1980))

The system now looked more like this. There was still a monolith for accessing data, but all the coupling was firmly in the realm of my free functions.

Layering Step Diagram

Taking stock of the situation, I could now see a way forward for my second force. I had to split the data handling to provide my co-worker with a version without note data support. I needed to stop using the one big data class and split it into a Library for the books and a Notes object to hold my notes. Making this change was relatively easy, but I also took the step of removing all the finding methods from the library at the same time, as I knew I would not need them anymore.

Step 2: Decouple false-coupled data

Decoupling the API
def GetBooksMatchingSubjectWithNotes(library, notes, match):
  return [book for book in library.GetBookList()
      if match.lower() in book.GetTitle().lower()
      and len(notes.GetNotes(book))]

The functions don’t look much different. The usage was still very similar, but I now had an extra parameter when I needed both books and notes in the query.

Use case of decoupled API
  library = Library()
  notes = Notes()
  print("Books with notes, on the subject of programming")
  print_books(notes,
              GetBooksMatchingSubjectWithNotes(
                  library, notes, "program"))
  print("Books by others on architecture")
  print_books(notes, GetBooksMatchingSubjectButNotByAuthor(
      library, "architecture", "Christopher Alexander"))
  print("Books from the 80s, sorted by RefID")
  print_books(notes, GetBooksInDecadeSortedByRef(library, 1980))

The system was now entirely decoupled in terms of data stores. I could replace each store independently without causing any changes to propagate through the system.

Decoupled Diagram

Turning point

I really could have stopped there if time had been very tight. My co-worker would have been able to use the book-related operations and the Library class. My Note code could be stripped and put in a separate file. But of course, it was quite ugly and still suffered from the first force problem. Adding new queries would not be easy, and copy-pasting would remain error-prone.

I saw all the patterns of repetition clearly. Most of these functions had the same return type as the Library.GetBookList() method. So, I turned them into filters, filtering that return value.

Step 3: Refactor to filter operation

Refactor to filters
def WithNotes(books, notes):
  return filter(lambda book: len(notes.GetNotes(book)), books)

def ByAuthor(books, author_name):
  return filter(lambda book: author_name in book.GetAuthors(), books)

def InDateRange(books, start_date, end_date):
  return filter(lambda book: start_date <= book.GetPublicationDate() <= end_date, books)

def NotByAuthor(books, author_name):
  return filter(lambda book: author_name not in book.GetAuthors(), books)

def MatchingSubject(books, match_string):
  return filter(lambda book: match_string.lower() in book.GetTitle().lower(), books)

def SortedByRef(books):
  return sorted(books, key=lambda book: book.reference_id)

def InDecade(books, decade_start):
  return InDateRange(books, decade_start, decade_start+10)

This also meant I only had to push lists of books into the calls rather than provide the whole Library object. I split the compound queries into the simple filters they intersected. I could then rewrite some of my earlier queries in a more reusable manner.

Use case with filters
  library = Library()
  notes = Notes()
  print("Books with notes, on the subject of programming")
  print_books(notes,
              WithNotes(MatchingSubject(
                  library.GetBookList(),
                  "program"),
                        notes))
  print("Books by others on architecture")
  print_books(notes, MatchingSubject(NotByAuthor(
      library.GetBookList(), "Christopher Alexander"), "architecture"))
  print("Books from the 80s, sorted by RefID")
  print_books(notes, SortedByRef(InDecade(library.GetBookList(), 1980)))

But then, it seemed a bit funny that I needed a function for NotByAuthor as well as ByAuthor. But there’s no way to un-filter a list. Again, I noted the repetition in each filter function and decided to keep filtering, but only once, and find a way to join those filters together.

Step 4: Refactor filters to specs

The Specification pattern loosely means using an object (a spec) as a predicate. You can evaluate it and get a boolean result given an agreed input. Constructing a spec creates a test you can run later. Constructor arguments set up a spec object to deliver a verdict. Its behaviour will generally stay the same once constructed. So, think of a spec as a way to judge another object.
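
Before the lambda version, a minimal class-based sketch may help show the shape of a spec; the ByAuthorSpec class here is illustrative rather than the code I ended up with.

Sketch: a spec as a small class (illustrative only)
class ByAuthorSpec:
  def __init__(self, author_name):
    self.author_name = author_name   # construction sets up the judgement

  def __call__(self, book):
    return self.author_name in book.GetAuthors()   # the verdict comes later

# is_alexander = ByAuthorSpec("Christopher Alexander")
# alexander_books = [b for b in library.GetBookList() if is_alexander(b)]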

For this step, I decided to implement my spec objects as lambdas. Instead of filtering by a NotByAuthor(author_name) spec, I would filter by a Not(spec) spec, which was a kind of Decorator or Wrapper over the ByAuthor(author_name) spec.

Refactor to specs (in this case, lambdas)
def WithNotes(notes):
  return lambda book: len(notes.GetNotes(book))

def ByAuthor(author_name):
  return lambda book: author_name in book.GetAuthors()

def InDateRange(start_date, end_date):
  return lambda book: start_date <= book.GetPublicationDate() <= end_date

def MatchingSubject(match_string):
  return lambda book: match_string.lower() in book.GetTitle().lower()

def SortedByRef(books):
  return sorted(books, key=lambda book: book.reference_id)

def InDecade(decade_start):
  return InDateRange(decade_start, decade_start + 10)

# NotByAuthor is no longer needed: compose Not(ByAuthor(name)) instead.
def And(a, b):
  return lambda book: a(book) and b(book)

def Not(a):
  return lambda book: not a(book)

I needed to construct the WithNotes spec with the capacity to verify against the note list. I also needed to construct the ByAuthor spec with an author name. The verification of the author happens later, but the object (the lambda) is pending, not actually running until later.

Something strange was going on with WithNotes because it linked the two data stores together. It felt like an SQL query where I would join two tables. In any small filtering language like this, there may be times when you realise you need to think about whether you want to work with the data as a document store or a relational database. Each has trade-offs. In my case, I realised that I would be satisfied keeping them separate and treating them as tables that must be joined.

Specs Can Join

Some prefer a document store approach. Such an approach would mean the data would become coupled again. Coupling can boost performance because you can distribute document processing, but distributing is not always quicker. The choice comes down to the specific problem you’re trying to solve. My problem was coupling, so a spec-level join was my preference.

Of all the free functions, the odd one out was SortedByRef(books), which was not a spec but a post-process on the output of the filtering operation. Sorting the data from a query felt familiar to me at the time. Whenever I queried a database, I would typically SELECT from some tables, have some form of filtering in the WHERE clause and then have an ORDER BY as the last step. SortedByRef was emulating that ordering step, which suggests there could also be grouping post-processes.
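
If a grouping step were needed, it could follow the same shape as SortedByRef; the GroupedByDecade function below is a hypothetical example rather than something this project required.

Sketch: a grouping post-process (hypothetical)
def GroupedByDecade(books):
  groups = {}
  for book in books:
    decade = (book.GetPublicationDate() // 10) * 10
    groups.setdefault(decade, []).append(book)
  return groups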

So, my use case now looked like this:

Use case with spec objects (lambdas)
  library = Library()
  notes = Notes()
  print("Books with notes, on the subject of programming")
  print_books(notes,
              filter(
                  And(WithNotes(notes),
                      MatchingSubject("program")),
                  library.GetBookList()))
  print("Books by others on architecture")
  print_books(notes, filter(
      And(MatchingSubject("architecture"),
          Not(ByAuthor("Christopher Alexander"))),
      library.GetBookList()))
  print("Books from the 80s, sorted by RefID")
  print_books(notes, SortedByRef(
      filter(InDecade(1980), library.GetBookList())))

I could see how this way of writing queries would be highly extensible. It’s open to any possible usage. However, it’s very raw, so I took the repeating pattern of filtering on the result of the Library object’s GetBookList and put the common code into a new method in the library.

New get books method in Library class
  def GetBooks(self, spec):
    return filter(spec, self.books)

Once complete, the final use case no longer needs to be concerned with filtering, just the construction of predicates.

Final use case
  library = Library()
  notes = Notes()
  print("Books with notes, on the subject of programming")
  print_books(notes,
              library.GetBooks(
                  And(WithNotes(notes),
                      MatchingSubject("program"))))
  print("Books by others on architecture")
  print_books(notes, library.GetBooks(
      And(MatchingSubject("architecture"),
          Not(ByAuthor("Christopher Alexander")))))
  print("Books from the 80s, sorted by RefID")
  print_books(notes, SortedByRef(
      library.GetBooks(InDecade(1980))))

The result of this work allowed my co-worker to write a spec that didn’t even use the book reference ID. In addition, they sorted by author name in a more traditional way.

New features
def ShorterThan(dimensions, upper_limit):
    return lambda x: dimensions.get(x.GetTitle()).Height() < upper_limit

def SortedByAuthorLastName(books):
    return sorted(books, key=lambda x: x.GetAuthors()[0].split(" ")[-1])
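
Their specs then composed with mine in the obvious way; the dimensions dictionary and the 250 mm limit below are invented for illustration.

Possible usage of the co-worker's specs (values are illustrative)
  short_books = library.GetBooks(ShorterThan(dimensions, 250))
  print_books(notes, SortedByAuthorLastName(short_books))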

Final thoughts

It doesn’t matter what language you write in; so long as you can build up a chain of operations, you can use this spec-tree pattern to resolve queries or other problems that look like a small language.

Address the inhibiting elements

We’re not just fixing patterns here, either. We’re attempting to fix all the relevant parts of the puzzle. Can we also steer a better path against the regression of Agile and DevOps? Can we inoculate ourselves against the feedback of the ego-driven predator pattern?

There are elements of the world of software development inhibiting software engineering. In The Nature of Order, Book 2[NoO2-02], on page 528, Christopher Alexander listed many elements from the physical construction world that inhibited his wholesome building construction process. I reproduce them here:

  • The process of banking
  • The control and regulation of money
  • The way money flows through a project
  • The conditions in which risk is deployed
  • The process of development
  • Speculation in land
  • Construction contracts
  • The role of architects and engineers
  • Organisation of construction companies
  • The nature of planning
  • The nature of master plans
  • The nature of construction contracts
  • The process of ecological evaluation
  • Evaluation by lending institutions
  • Architectural competitions
  • The size and scope of architect’s work
  • The teaching of architecture
  • The priorities of manufacturers
  • Building codes and regulations
  • The role of town planners
  • The mortgage process
  • The process of housing ownership
  • Control over housing
  • Ownership of public land and streets
  • Protection of the wilderness

These listed elements interfere with the wholesome process of physical construction. In some projects, he tackled some of these problems head-on. In others, he avoided them through careful selection of clients. Fixing the process and removing all the impediments might not be possible in many lifetimes. But knowing it can be better is step one. A better software process is possible: one that is genuinely agile, allows for a more effective introduction of DevOps practices, and creates better products with fewer resources.

If we look to Alexander’s list for inspiration, many elements can be brought across wholesale into software development. Some of the items on the list need conversion from the domain of physical architecture to development. Have a go yourself, but here’s one attempt to produce a list of elements inhibiting software development using copies, adjustments, and additions to Alexander’s list:

  • The control and regulation of money
  • The way money flows through a project
  • The process of development
  • Copy-paste, asset flip, and trend-chasing development
  • Publisher/client contracts
  • The role of designers or project managers and engineers
  • Organisation of development studios
  • The nature of planning
  • The nature of master plans and schedules
  • The process of moral and political obligation
  • NDC, GDC, FOSDEM, PyCon, CppCon, and other big events
  • The size and scope of software architects’ or designers’ work
  • The teaching of software engineering
  • The priorities of tool providers
  • The lack of anything akin to building codes and regulations
  • The role of vendors and platform providers
  • The process of purchasing software
  • The distribution of authority to make decisions
  • Patent law

What this tells us about our problem is that you must apply systems theory to achieve anything lasting. There are too many elements that will push back against a change. Each thing alone is not to blame, but combined, they are a formidable force. Each brings a small policy-resistance effect. Each senses your change and feeds back, leading to actions that undo your work.

To resolve the problems with agile processes, we must find a way to make the good side of Agile visible to those who need to relinquish power. Who are they? Look at the list. They are schedules, contracts, money flow, and trend-chasing, and even events play a role.

When your organisation tries to keep every developer busy rather than keep the development pace at maximum, you have to look at what forces people to look busy rather than be effective. Those things are the roles we play, the size and scope of the projects we are on, the contracts we write, and even how we decide how many hours we work and how we evaluate and compensate developers.

As a concrete example, the above elements create a weird incentive to work slower. Most management punishes workers for working fast. Don’t believe me? Consider this: if I, as a worker, can finish a job in half the time you expected it to take, you would most likely wish to pay me only half the price for the job. We expect to pay for work by the hour, regardless of its value to us. But if the work is completed satisfactorily in a fraction of the expected time and simply not delivered until the deadline, the employer is none the wiser and willingly pays the full amount. You receive the product later than you could have, and you pay more for it. Something is very wrong when we reward people for doing a worse job or taking longer to complete a task.

We need to concentrate on the inhibiting elements. The workers are fine. It’s the rewards and expectations that are broken.

Moving beyond image-driven

As an example of fighting the antagonistic forces, perhaps we can think about how to address the elements involved in the image-driven construction process. We know much of what was before must somehow be preserved. What makes up our day-to-day work and the necessities of development must be kept. The image-driven process is linked to the movement of money, rewards, contracts, and the size and scope of projects. Without the image-driven approach, it might be impossible to maintain large teams doing massive projects. But is that a problem, or is it an indicator of the unsustainability of our process?

How could you create an airport without some big-money investor driving the process? An airport is desired by those running airlines. It is also wanted by those wishing to travel or ply their trade by air. As a second-order effect, it is also desired by those wishing to set up shop within or near the airport for commercial trade or to offer transport services. Many people want the airport so they can provide goods and services to those who use it. But even with all these people hungering for it, who would build it without the up-front money to buy labour?

The answer must be those who want it. But how can so many small actors drive the creation of a thing as large as an airport without it causing them to go bankrupt before it’s finished and worth anything to them? A small newsagent can’t wait five years to sell their first newspaper to the first transatlantic traveller. A taxi firm cannot invest in hope of passengers who won’t book their first ride until the owner’s newborn has spent their first day at school. Surely, waiting around for an airport to be complete cannot work.

Perhaps we’ve missed something important. We’re assuming the airport needs to be completed to be valuable. This is the same assumption people often make about evolution. They ask, ‘How can eyes or wings evolve when partially finished eyes and unfinished wings are useless to the creature?’ to which we respond that unfinished wings make rather good parachutes, and partial eyes still warn you of overhead predators.

The default position of creating the product in one step is an emergent property of how we work now, where the big-money option is present. Do we need to build a massive international airport in one step? Or do we make a smaller one that serves the local people and expand it over time? The development must start from a worthwhile first step, and each step must lead to a better airport.

Perhaps instead of a gradual change in capacity, we co-opt something else. For example, one of the busiest airports in the world, Heathrow Airport in the UK, started as a small airfield, was developed into a larger military airport by the government during WWII, and then switched to a civil airport as the war ended.

The largest airports in many countries started as small ones. One runway turns to two or three or more. Land needs to be bought and developed upon later, which can be painful, but not fatal like a half-finished, overly expensive project. And, if the needs of the people change, the plans change, and losses of a grand scale are not realised. It is better to have an expensive but safe path to expansion than a cheap and efficient path to ruin.

The distribution of authority to make decisions

The management of anything needs to be at the correct scale. Making rules for the layer below is usually allowed but often a bad idea. Local decision-making works at many scales. The gardener knows best how to make their shed work for them, and the chef knows how their kitchen should be. The mayor knows best what building to construct or whether to spend taxes on road maintenance. The government knows best how much to spend on health care or the military.

But only at the correct scale. The state or country might know how much to spend on the military, but they wouldn’t dare specify how much to spend on bullets. The chef might know the kitchen but cannot be trusted to know how much should be spent on funding health and safety inspection organisations. Everyone says the correct standard is just a little less than what they are doing.

Big government isn’t bad because it’s large. It’s ineffective when it imposes rules outside of its locality of scale. When anyone makes decisions outside the scope of their day-to-day understanding, where they cannot feel the pinch and pull or pressure, they will make mistakes. When anyone makes decisions where they are the direct beneficiary, the decisions cannot be made disinterestedly.

Even when intended to protect, rules imposed from above often cause harmful downstream effects. Imposing local laws at the global level does not work either. We need to distribute decision-making authority to those able to enforce it through local wisdom. And we need the wisdom to know which domain is local and which is remote.

What Can We Do Now?

We’ve covered a lot of ground, and you probably have ideas of your own for making things better. I have a few thoughts too, so this is where you get to read what I think we can do. We’re now in the final chapter, and although it is a bit of a list, I hope that by its conclusion you will be inspired to improve how you interact with patterns and processes.

The following sections will suggest you:

  • Concentrate on pattern languages, not patterns.
  • Learn to recognise patterns better.
  • Ensure you understand the criteria for a pattern.
  • Question the existing patterns.
  • Gain and train your discernment skills.
  • Look for patterns in unexpected places.
  • Always consider systems theory.
  • Take lessons from the history of design patterns.
  • Move away from adding, and attempt to unfold your designs.
  • Name patterns better.
  • Learn to write well.
  • Learn to elicit the deeper needs of the client.

Learn to recognise pattern languages

Learning to recognise patterns was part of the original movement, but identifying where languages might be hiding could be even more worthwhile. We should move away from the pattern-hunting mode of the original movement and instead look for pattern languages that are either already present or growing in a new domain. To do this, we should hunt for big, hairy problems rather than clever solutions.

We should now know we cannot look for a pattern language in a programming language, as there is no high-level problem to solve. Pattern languages solve problems. Python, JavaScript, and C++ are not problems. I may need to rethink that last statement … Regardless, they do not present a well-defined problem that can be solved. They are the tools to solve problems, as are frameworks and libraries such as Rails, React, Boost, pandas, and Flask. None of these are problems.

Within a problem domain, look for a pattern language. However, do not name it; instead, look at the existing practices and bring some pattern structure to the language. You can try mining for new patterns or redefining the tutorials and how-to guides in the documentation into categories of context, problem to be solved, and related problems and consequences.

Look to the work of Takashi Iba on pattern mining[Iba2021b] and pattern language writing[IbaHtWP21]. I have read three of their pattern language books, and each shows a set of patterns forming a language that solves a problem. The first book I read was Presentation Patterns[PresentationP14], which includes a pattern language for developing presentations. It contains many patterns that also apply to writing, but the point of a pattern language is to concentrate on a particular problem.

Their other books, Collaboration Patterns[CollaborationP14] and Learning Patterns[LearningP14], show the same structured approach to mining, refining, and structuring. In each case, the problem was mined by talking to many people who solved these kinds of problems. The results were collected and organised to find repeating and related elements. Core actions were identified, and patterns to support these actions were then connected to them.

If we look at the creation of websites, we can use Iba’s method to mine for all the core actions of a web developer—the design and creation of the database and the design of the user flow and page views. I am not a web developer, but what little I know leads me to believe that many common core activities have project-specific determinations waiting for resolution.

We can start from the realm of existing documentation. Documentation exists to solve problems. From there, new pattern languages can emerge. Those with the discernment for patterns can detect them. They can invite those with the documenting skills to elevate the content.

Learn to recognise patterns

Recognise patterns and translate them into a better and more powerful representation. I want to do the reverse of Christopher Alexander and publish this book—my Timeless Way of Building—before I write and publish my A Pattern Language. A list of patterns in a new form would be good, but not without the groundwork firmly in place.

A pattern as a procedure, a transform, a process

A pattern is not just the recognition of a potentially better situation; it is that recognition and the operation that gets you there, working in tandem. It’s the recognition of a way to transform, and often a process that produces itself. We must shed light on the steps, not just shine a beam on the result of its application. When we recognise patterns, we need to reveal the original discomforts that induced these resolving actions.

We must perceive the lack that preceded the pattern to understand the forces at play. Without awareness of the pain the context was causing, we cannot be sure we have satisfied the situation to the highest degree. Neither will the pattern user.

Patterns are present at all scales and in all networks of interaction, not just at a single level. Even a single pattern might apply across multiple levels or be appropriate across various domains. The types of patterns are numerous and far-reaching because all it takes is for a self-supporting sequence of operations to come about repeatedly for them to form. So, where could we look for them?

Product patterns

Many patterns we can verify because elements of the world created for us are product-centric. The product can be an application or the way in which that product is communicated to the potential users.

  • UI and UX patterns, solving problems of understanding the user’s mental model and guiding them through your app.
  • Patterns of build processes, how to reduce wasted time and get the best feedback as fast as possible.
  • Deployment and delivery patterns, delighting customers and resolving problems before they happen.
  • Patterns for paring back and cutting features, trimming projects to the right size.
  • Patterns of building for the life of the customer, becoming a part of a user’s life and gaining great credit through extended word-of-mouth.
  • Marketing patterns, getting to those who would care and how to avoid negative reviews.

Organisation patterns

Other patterns we can verify because they are about how we work and live. They are related to social groups, connections, and communication patterns.

  • Hiring, teamwork, training, and firing patterns, improving the sense of community and avoiding hard-to-resolve mistakes.
  • Patterns of retaining talent or investing in people, solving the problem of knowledge loss and immaturity in teams.
  • Research, knowledge, learning, and teaching patterns, pedagogy for all.
  • Patterns of organisational sizes and direction, ensuring teams are motivated by the goals of the organisation by accepting the possibility of change in either party.

Process patterns

Some patterns can be sought in the way we work. How we grow, change, evolve. How we reward and how we respond to failures of our systems. Most of these patterns will need to exploit an understanding of systems theory.

  • Development process patterns, how to reduce waste and increase pace, how to engage everyone in creating value.
  • Patterns of extending and new versions, how to avoid alienating your customer while engaging the new.
  • Scheduling patterns and sign-off patterns, avoiding costly extensive tick-box exercises that inhibit fast turnaround.
  • Handling permissions, secrets, and licenses, that is, the patterns of stress-free law abidance.

Refactoring and maintenance patterns

The least verifiable patterns will be those of the construction process and sub-products. Source code and processes to verify and validate source code will need careful attention.

  • Code structuring and migration patterns, how to structure for easy, safe change and future comprehensibility.
  • Patterns of debugging, solving problems, learning, and prevention, so bugs stay fixed.
  • Maintenance patterns, enabling the future to respond as fast as the past.
  • Patterns of planning and refactoring, that is, approaches to impact our willingness to make changes and reduce the apparent waste of effort created by detailed, far-distant plans.
  • Patterns of maintaining live services and continuous improvement, for when you have to keep the service up and running no matter the weather.

Where else?

New patterns will emerge as people change and what they value changes. It is a timeless way, not because the patterns are timeless but because people interact and have needs, and though the possibilities are endless, some configurations recur with great regularity.

We should look for new movement patterns, in both leisure and work. Patterns emerge from the configuration of space and the natural interaction of people. The modern situation of a train station is simply a configuration of space from which the motion of people emerges. There is something timeless, like the application of the laws of physics, but it is contemporary in its realisation.

Judge principles

When reading other works with programming principles, consider where they come from. Are they principles, or are they patterns in disguise? Sometimes, principles are just principles. Keeping functions short and using descriptive names for identifiers are guidelines for how to do the work. However, some principles emerge from momentary forces in contexts.

I think some of the SOLID principles might be design patterns. For example, the single responsibility principle resolves some forces of code maintenance in the context of recognisable repetition in code. It’s not a solution, but it does constrain solutions, so it feels more like an Alexandrian pattern, a simple one that helps guide to a better set of solutions to many situations. This is also one of those patterns that applies at many levels; it’s both language and domain-agnostic.

Interface segregation seems like a property to me, not a pattern. It is a positive attribute you see when unfolding a design under the forces introduced by new use cases. The value in this property is the recognition of the problem caused by having one and only one interface. The accidental coupling is revealed to the reader of the principle in much the same way important moments of human lives are revealed by many Alexandrian patterns.

Dependency inversion is a pattern in my eyes. The concept does not feel like a principle, but it’s quite easily interpreted as a process to reduce coupling when the forces of variation are present. Knowing this pattern leads to many other patterns where options must be introduced without giving knowledge to third parties who should remain indifferent and disconnected.
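
To make that reading concrete, here is a minimal C++ sketch, with invented names, of dependency inversion treated as a process: the high-level policy owns the abstraction it needs, and the varying detail implements it, so new variations can be introduced without the policy ever learning about them.

    #include <cstdio>
    #include <memory>
    #include <string>
    #include <vector>

    // The high-level policy owns the abstraction it depends on.
    class MessageSink {
    public:
        virtual ~MessageSink() = default;
        virtual void deliver(const std::string& message) = 0;
    };

    class ReportGenerator {
    public:
        explicit ReportGenerator(std::unique_ptr<MessageSink> sink)
            : sink_(std::move(sink)) {}

        void publish(const std::vector<std::string>& lines) {
            for (const auto& line : lines) {
                sink_->deliver(line);  // no knowledge of email, files, or queues
            }
        }

    private:
        std::unique_ptr<MessageSink> sink_;
    };

    // The low-level detail depends on the abstraction, not the other way around.
    class ConsoleSink : public MessageSink {
    public:
        void deliver(const std::string& message) override {
            std::puts(message.c_str());
        }
    };

When a new force of variation appears, say delivering to a remote service, a new sink is added and the report generator remains untouched; the pattern is the repeated act of inverting the dependency whenever that force shows up, not the class diagram itself.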

Look for forces

Patterns exist as a response to problems. Problems exert forces on elements of a context. Look for those forces. The pain points are what differentiates a pattern from a set of instructions. Seek them out in bug reports and feature requests. Look for them in the requirements and features of languages. A programming language might not be a good place to find a pattern language, but it can spawn a pattern by its limitations or opportunities.

When you are ready, learn about the many forms of waste in lean manufacturing, as waste is an ever-present force in most organisations. DevOps is an extension of recognising waste in IT and provides many clues to guide pattern recognition for processes.

Internalise the criteria

Examples of the solutions in design patterns are found in source code for some languages but in features for others. Strategy, Iterator, Prototype, Factory Method, Whole-Part, and Chain of Responsibility all turn up frequently outside of object-oriented code, even when people don’t know about them. This indicates that patterns are hiding near them as the solutions are self-forming. Their presence as primitives in a language doesn’t make them less of a pattern but more. They were valuable and obvious enough to be internalised into a language.

First criterion – They are real

It’s only a pattern if you can give an example. This first criterion should be obvious, but some examples from the published works disagree. So, I shall start with this and claim that examples prove a pattern can be enacted. They provide a starting point for a mental image. Examples allow a pattern description to explain extension or adjustment to match different contexts.

Christopher Alexander claimed, ‘If you can’t draw a diagram of it, it isn’t a pattern’[TTWoB79]. I believe code design patterns need source code, and management patterns need anecdotes, because all patterns need a few structured examples. Buildings need diagrams because they present forms. Code patterns need source code because they illustrate relationships of code forms. Management patterns need anecdotes because they represent timelines of people’s lives. The examples should not only be from where the pattern was applied correctly but also from where they were misapplied or absent. When we learn to use patterns, we learn how to repair things.

Look for evidence that patterns are true and real, but also look for evidence that they are false. Examples such as Functionality Ala Carte and Command have a suspicious lack of insight. Recurring, naturally emerging problems and bugs are curiously missing from their documentation. This is evidence that the pattern was not worked with deeply, or that it was never truly implemented, or important details were omitted for unknown reasons. In this case, reject the pattern as incomplete.

Second criterion – A process and a thing

So, the second criterion for a pattern is a clear description of a state of stress or disrepair. A diagram or code example will help, or a story about the pain can set the scene. The reason why the pattern is applied must be apparent, not just what the solution looks like.

Many so-called design patterns are just wrapped-up solutions. They either don’t help at the root or are not necessary. For example, Memento isn’t a pattern, as it doesn’t have value in itself and doesn’t balance forces. In every case I encountered, Memento was the data store element of the undo pattern. It does not form without the undo pattern1 and does not form without Command, so it is merely an element of those patterns. There are other instances of something like the Memento pattern if you count serialising objects and de-serialising them back into life, but that’s not what was written into the original pattern.

Another problem with Memento is that it’s incomplete. For example, when you have a large object, why does the pattern suggest requesting a snapshot and then committing the action? This must surely lead to the situation where the Memento must represent all the information necessary to revert any future operation. The implementation section presents this limitation as something which is the responsibility of the command, but why? Either the command knows the target much too intimately, or it will store too much information, or possibly the target is revealing too many different contextual memento creation methods.

When a method on the target can return a Memento, this constraint evaporates. When you design the affected object so it can shallow copy, the constraint vanishes for different reasons. The pattern does not offer either of these solutions and, thus, is unfinished.
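
As a hedged sketch of that first alternative, with invented names: the originator hands out its own opaque memento, so a command keeps hold of one without ever needing intimate knowledge of the object’s state.

    #include <string>
    #include <utility>

    // Hypothetical originator that creates and restores its own mementos.
    class TextDocument {
    public:
        class Memento {
            friend class TextDocument;   // only the document can see inside
            explicit Memento(std::string state) : state_(std::move(state)) {}
            std::string state_;
        };

        Memento snapshot() const { return Memento(contents_); }
        void restore(const Memento& m) { contents_ = m.state_; }

        void append(const std::string& text) { contents_ += text; }
        const std::string& contents() const { return contents_; }

    private:
        std::string contents_;
    };

    // The command stores an opaque memento; it never inspects document internals.
    class AppendCommand {
    public:
        AppendCommand(TextDocument& doc, std::string text)
            : doc_(doc), text_(std::move(text)), before_(doc.snapshot()) {}

        void execute() { doc_.append(text_); }
        void undo() { doc_.restore(before_); }

    private:
        TextDocument& doc_;
        std::string text_;
        TextDocument::Memento before_;
    };

Here, the constraint falls away because only TextDocument can look inside a Memento; the command simply keeps one and hands it back on undo.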

A pattern interacts with other patterns. The way Composite interacts with Visitor or Interpreter is weakly defined. The pattern of Pipes-and-filters works well with pure functional transforms and the principle of avoiding global state for concurrency. To interact is not to rely on but to contrast and cooperate.
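
To illustrate that last point, here is a minimal sketch, not tied to any framework, of pipes-and-filters built from pure transforms: each filter takes a value and returns a new one, so there is no shared state to guard when stages run concurrently.

    #include <cctype>
    #include <functional>
    #include <iostream>
    #include <string>
    #include <vector>

    // Each filter is a pure transform: input in, new value out, no shared state.
    using Filter = std::function<std::string(std::string)>;

    std::string runPipeline(std::string input, const std::vector<Filter>& filters) {
        for (const auto& filter : filters) {
            input = filter(std::move(input));
        }
        return input;
    }

    int main() {
        std::vector<Filter> pipeline = {
            [](std::string s) {   // trim trailing whitespace
                while (!s.empty() && std::isspace(static_cast<unsigned char>(s.back()))) s.pop_back();
                return s;
            },
            [](std::string s) {   // upper-case every character
                for (char& c : s) c = static_cast<char>(std::toupper(static_cast<unsigned char>(c)));
                return s;
            },
        };
        std::cout << runPipeline("hello, pipes and filters  ", pipeline) << '\n';
    }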

Third criterion – They are wholesome

Beware of unfinished patterns. Refine them and seek out all the edges. If you don’t, you risk stating there is no solution where one exists. The third criterion is that a pattern stands alone but is related to other patterns.

Reading a file line by line is a pattern repeatedly observed since UNIX’s first years. Formatting strings in printing functions, finite state machines, handing out agents or handles to users of APIs rather than raw memory access, and cursors for query languages are all patterns that we see in programming but that are missing from the GoF book[GoF94]. The criterion for a pattern there seems to be the repeated finding of an object-oriented alternative to an existing solution outside the object-oriented space.

Fourth criterion – They address problems of design

There are non-object-oriented ways to express most of the GoF book patterns. GoF patterns that can be reinterpreted in this way could be considered idioms of an object-oriented design approach to the problem they address. The fourth criterion is that patterns are problem-centric. A pattern or pattern language is not paradigm-specific. It must be centred around solving a problem, not just offering an alternative solution using different tools.

Fifth criterion – They are responsive

The fifth criterion is related to emergence. When people latched onto the idea of patterns, they failed to attribute enough importance to the concept of forces. They understood the context part well, so they made that the way to identify where a pattern fits. If the context is correct but the forces don’t match, it’s not a pattern. It can still be a solution or simply something to be expected. Patterns generate solutions. Forces often generate patterns. If a recurring pattern cannot be found at the centre of a given set of forces, it’s more likely someone’s attempt at a pattern.

For example, I am a primitive human living in the jungle. I know how to make and use string and create tools from raw materials. This is my context. I have access to wood, flint, vines, and other resources. These are also part of my context. There are animals all around the forest. Some fly, some climb, and some move around at ground level. These are all part of the context. This context does not elicit any patterns, as there are no forces. It’s a stage upon which forces will play, but it does not suggest anything independently without first identifying the forces at play.

The forces I add to the system are: I am a hungry human who can scrounge for nuts and berries, but they are not plentiful at ground level. I am big and clever, but I cannot catch the small, weak animals at ground level, as they are too fast, and I need help to reach the ones that climb or fly.

We can see how a few patterns emerged once I added forces to this context. We will make tools to create ways to reach the fruit or animals beyond our reach, set traps for the skittish animals eluding our grasp, or even invent the bow and arrow. These are patterns because they are recurring solutions that resolve forces in a context. They have objective criteria of both context and forces.

Forces are the value of success and the cost of failure for each contextual relationship. The animals want to survive, so they flee as humans approach; we need to get close to catch them, so we extend our reach with a trap or a ranged weapon to resolve the conflicting forces. To get at the fruit higher up, where humans cannot reach and where free climbing carries greater and greater risk the higher we go, we build ladders or climbing gear to reach great heights more safely, resolving those conflicting forces.

It’s the unresolved forces that drive the appearance of patterns. This is the last criterion.

  1. They are real: there are real examples.
  2. They are a process and a thing: the state of distress is well described. They repair and make things better.
  3. They are wholesome: they are both fully contained and related to other patterns. They support, not inhibit, the resolution of neighbouring problems.
  4. They address problems of design: a pattern or pattern language is not paradigm-specific and not an implementation.
  5. They are responsive: patterns must be emergent from the unresolved forces and resolve them.
1

The memento iterator pattern in the GoF book is quite a stretch and is almost an anti-pattern as it splits lifetime responsibility between two separate objects, hence my discounting it.

Question the patterns you know about

Refine those you know already

Your journey to using patterns better has already started. Reimagining the patterns you know into a problem-centred form will elevate your ability to detect new patterns. Question the patterns you already know about, both their usage and existence. Look for the way patterns were extended without fanfare, such as the routing pattern. Re-read old patterns and try to interpret them in new languages without falling into the ‘strategy is just first-class functions’ trap. Patterns are not features; you know that now. They are wisdom. There is no hidden wisdom in ‘first-class functions’.

Find different ways to classify patterns you learn about. Don’t assume they exist in one hierarchy. Look for different ways to map them against each other. Each pattern resolves a problem but may cause tension elsewhere. The downsides are often ignored or unknown, but knowing when not to use a pattern is more powerful than knowing when you can.

Learn a few more patterns and rewrite them

Learn a few more patterns from any of the published works. Thousands of software design patterns were available at the turn of the century. There are possibly over 10,000 now. It doesn’t matter where you find them, rewrite them. The last Pattern-Oriented Software Architecture[POSA5-07] book drives home some great points about finding patterns and verifying their utility.

The GoF book[GoF94] contained many recurring solutions to problems but did not introduce a pattern language. Rewriting the GoF book in a problem-centred way would allow more people to find, extend, and use the patterns already out there. Rewriting it as a guide to migrating a codebase from procedural to object-oriented could help turn it into a pattern language. This is what the book Refactoring to Patterns[RtP04] is—a rewrite for a problem-centred audience, taking legacy code and improving it.

As mentioned elsewhere, the main pattern in the GoF book was to replace procedural calls with objects. Where a function pointer or parameter would have previously dynamically defined the behaviour, many solutions in the book used an object to provide that variability. Other aspects of the GoF book could be interpreted as variations on the theme of decoupling via inheritance. In effect, by nominalising actions, the book reinvented callbacks as objects, which gave them stronger guarantees in the languages available at the time.
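
As a rough illustration of that nominalisation, with invented names: the procedural form passes a function pointer, while the object-oriented form wraps the same behaviour in an object that can carry state and sit in an inheritance hierarchy.

    #include <cstdio>

    // Procedural style: the varying step is a callback passed as a function pointer.
    using Step = void (*)(int);

    void forEach(const int* data, int count, Step step) {
        for (int i = 0; i < count; ++i) step(data[i]);
    }

    void printValue(int v) { std::printf("%d\n", v); }

    // GoF style: the same step nominalised into an object, giving it a name,
    // state if needed, and a place in an inheritance hierarchy.
    class Visitor {
    public:
        virtual ~Visitor() = default;
        virtual void visit(int value) = 0;
    };

    class Printer : public Visitor {
    public:
        void visit(int value) override { std::printf("%d\n", value); }
    };

    void forEach(const int* data, int count, Visitor& visitor) {
        for (int i = 0; i < count; ++i) visitor.visit(data[i]);
    }

    int main() {
        const int values[] = {3, 1, 2};
        forEach(values, 3, printValue);   // the callback
        Printer printer;
        forEach(values, 3, printer);      // the callback reinvented as an object
    }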

Other books on design patterns also missed the language aspect. Find the problem they are trying to solve. Look at the structure of the problem and see if you can’t find the language hidden inside.

Observe better phenomena

Wisdom helps you see things. Patterns help you see things if you know about them. When you don’t, you can still consider what might be overlooked and use that to guide you towards the phenomena currently slipping by undetected.

In range

In range means close enough to see. You might not be able to see what’s wrong from 100m away, or you might only be able to see it from a vantage point even further off. One millimetre is too close to see what’s causing some problems but not close enough for others. What you perceive depends on how far away you are. This is the old problem of not seeing the wood for the trees. From the window of the oppressive castle, the village can look happy and idyllic.

Some development methodologies overlook this aspect of range. Scrum takes a nearby perspective, that of the product owner and the most important thing to them right now. It can be challenging when structural changes are essential but never quite the next most important thing, until they are suddenly late and urgent.

In other development methodologies, the range can be much greater. The goal can be quite far away, and all the problems on the way there become invisible at that scale. Unexpected issues are bound to turn up. Here, the distant view leads to fewer high-level mistakes, but with so many unobserved local problems, it carries a higher ultimate cost.

Another extreme is the bazaar-style approach to developing an established software product1. It has many contributors and one director. Each contributor will have an idea of what will make the product better locally but will likely only have a very myopic perspective of the whole product. The contributors are all highly tuned end users with different use cases and priorities. This is why the bazaar methodology requires a leader who is aware of the general situation for all and has a well-restrained ego. Their ego would fight against the best outcomes, as it would put the value of identity over the product’s value. In bazaar development, the recognition and identity of each contributor is often quite important and must be respected by those with their hand on the tiller.

Domain contrast

Due to our perspective, what we care about also indicates what we don’t or can’t care about. We need the UX team to feel they can tell the product designers when the experience of their users is no good. With critical feedback, we can avoid introducing complicated workflows. We need the security team to be able to contribute to our design process, so we don’t miss legal requirements with far-reaching implementation implications. Our programmers must inform content creators when the assets they create are too expensive for the target platform. Without these alerts, much work remains hidden until too late.

Sometimes, we need an astute outsider or intermediary to help those on the inside perceive. Outstanding issues must be translated and communicated to those who cannot sense them. The intermediary can be a person, a specialist or consultant, or simply someone added to a team to be the eyes for a particular domain. Other times, it can be a program we regularly run or an active indicator generating relevant and real-time notifications in a tool we use.

What you always see

Many works on software development will have some assumptions built in. Older works assume a procedural programming paradigm. Modern books mostly assume an object-oriented architecture. Future books may assume functional programming is the norm. That’s an unknown for me. But notice they all agree on programmers programming in a programming language.

One realisation I have had over the last few years is that there are two main types of programmers: the specialist programmer, who is well-versed in a piece of software, a language, or a subject matter such as algorithm design; and the glue programmer, who is more of a generalist but plugs things together nicely. Each type has strengths and weaknesses. When I first started programming, there was only one type of programmer: the programmer who did it all. There were more types around; I just couldn’t tell because I didn’t know how to discern between them.

I recognise the specialist in me. I might know quite a few different things due to the varied demands and the nature of game development, but in general, I know only a few things, and I know them deeply. I find I need a lot of help to learn a living codebase quickly. Those who can pick up new things fast are more suited to the application developer role. They might not have the attention required to internalise complicated connected knowledge, but they have the mental flexibility to jump between tasks, levels, and roles, which makes them indispensable in a modern application development shop.

This leads back to the section title. We always see something and assume it’s the norm. I saw programmers who did a bit of everything and thought all programmers were like that. We assume how things are is how they should be, but that’s wrong on two counts. Many people learn programming these days, but not many are suited to become specialists or management, and there’s no uplifting career track for people who build lots of small things well and fast. The way in which we compensate people is stuck in the values of the past, stuck in what we always see.

It’s true everywhere, not just in software development, even in the works of Christopher Alexander. Recognisably Western patterns dominate A Pattern Language[APL77]. Identifiable landmarks mainly refer to villages, towns, cities, houses, workshops, and other common recurring elements of the Western world. I’m not trying to put down the work; I’m sure he was aware of how it captured what was normal to his eyes as well as capturing the patterns. In later works, Christopher Alexander rectified these mistakes and leant on more fundamental forms rather than what was merely normal from his perspective, and from that, we gain a deeper understanding of what makes the whole world harmonious and wholesome.

If we continue writing about how things are rather than finding a formula that works regardless of the variables inside it, then we are doomed to find a mirror for ourselves. You can learn much from mistakes, less from success, and even less from a successful repetition of a prior event.

What you cannot even see

Darkness is an absence of light. If you have never experienced light, you will find it difficult to comprehend the impact of darkness on people. You can understand it intellectually, but the closest approximation might be the loss of perception when you are effectively deafened by being in a loud environment.

The problem is we are missing the extra information that is only perceptible when we have internalised the connections and meanings of the observable facts. For example, when listening to a conversation in a foreign tongue, you cannot decipher the content or meaning. It’s often possible to speculate on the mood of the conversation without understanding the language, but even then, it’s still guesswork.

Whatever you are unable to experience, for whatever reason, denies you the chance to see what is missing, even if it is affecting you. Just as someone completely blind is unlikely to be afraid of the dark, someone with no experience in concurrent or parallel processing will likely not fear race conditions.

Becoming conscious of higher-level concepts is only possible when you can perceive and understand the foundations. When you don’t understand basic mathematics, there’s little chance of recognising the impact of statistics and probability on your decision-making.

All this is also why the principles from the Agile Manifesto[AM01] can be so beneficial. We learn more details about the problems inherent in the project sooner. In many cases, working programs are prioritised over documentation and contemplation, leading to seeing systems interacting live. Early working code lifts the blindfold on how things interact. Humans are good at guessing how simple single things turn out but often fail to predict how systems work or how exponential growth pans out. These are things to which we are naturally blind. Therefore anything allowing us to see them or their effect gives us an advantage.

Always ask questions about how a project will work in the end, not how each piece will work. I always prefer testing work when it is integrated rather than asking for acceptance in an isolated testbed.

Swimming pools for kids by the sea

The importance of things can sometimes only be seen by those close to the problem. People might ask why you need a swimming pool in a coastal town when the kids can swim in the sea instead. The problem isn’t apparent until you think about swimming lessons.

The more context and awareness of a problem you have, the fewer solutions will seem appropriate. When you do not understand or consider all the purposes of swimming pools, you may de-prioritise them where they are necessary and provide them where they are not. If those in charge don’t know why a community wants something, that doesn’t mean it’s frivolous. Understand the community, and make sure those in charge know why you want that education.

If a person in a leadership role doesn’t understand the impact of their decision to de-prioritise something, then it’s your duty to correct them, to help them see what happens when they commit to those actions. A computer game producer putting off localisation until all the strings in the game are finished might have to be informed of the costs incurred by their scheduling. A product owner delaying bug fixing may not know how technical debt accrues interest, how bugs get more complicated to fix the longer they hang around, and how tasks in a clean codebase are more accurately estimable. A project manager who delays optimisation may not know that earlier optimisation work leverages faster development or simplifies later optimisations.

Two things are going on in these situations. Disconnected experience in the domain means outcomes foretold by others aren’t foreseen by those making the decisions. And there’s an inability to see alternative usages from how you usually observe things. This aspect is not quite as bad as functional fixedness, but in the case of a swimming pool, the person making the decision may think it is solely for the purpose of recreation. And when you put the pool into the category of leisure and pleasure, the idea of cutting funding when there’s a beach nearby makes perfect, tragic sense.

When reviewing tasks and someone, not a decision-maker, pipes up with concerns, or even if you hear murmurings of discontent, listen harder. Ask questions revealing your missing knowledge. Without awareness, you will make bad decisions. Find the reason why there’s anxiety in the team and figure it into your decision-making process.

1

The Cathedral and the Bazaar (1999), by Eric S. Raymond.

Where to find them

We can find some pattern sources by looking at the human aspects of software. Those we find can be interpreted and assessed through the objective measure of wholeness. We can look at studio layout, UI, and the design of applications in general. We can look for patterns in human resources and patterns for rewards. Through the lens of survival, we can hunt for patterns providing a better working environment, both locally and at the scale of corporations. We can look for patterns of how we should develop processes, design code, and test our software. All these systems can influence and be influenced by other elements, hence the systems theory aspect.

The physical environment seems the most obvious. We have a long history of working in offices and have worked with other humans for thousands of years. We will have a sense of quality for these areas. We can trust some of our instincts on what will make things better. A Pattern Language[APL77] has some suggestions on this front, as work itself is not a new invention by Silicon Valley types—at least not until they reinvent it as a ‘side hustle using equipment borrowed from and backed by a like-minded investor who takes a large cut of your profit as long as you follow their guidance’.

The quasi-physical world of the end-user experience is the next most obvious. Applications are lived with. They need good geometry and should follow the patterns of breaking symmetry to convey information. Mostly, you can appraise them by the same overwhelming agreement method. Beyond feeling, we have many good rational metrics supporting us, such as measuring time and motion—but not for evil this time.

User stories are bountiful hunting grounds for patterns. There are undocumented patterns, such as Exit through the gift shop1, which are retail design patterns. These are patterns of museums and other attractions that offer free entry. The forces at play are a need to supplement income without ruining the experience in any way.

Exit through the gift shop gives the patron the advertising-free experience they desire and invades their space with a request to consider showing some appreciation or buying a souvenir only at the last moment. We’ve seen that websites started doing this with the hover-out showing a ‘Please don’t leave’ message. There are likely other design patterns you can find based on the literal actions of your users. These are UX patterns. Look for the action, find the driving intention, and see if there’s a mental model you can infer.

In his book Hooked[Hooked14], Nir Eyal talks about ways to discover these patterns by considering the feelings people have and how they respond to them. Moving away from engaging users, we can still analyse their reactions to emotions to gauge the kinds of forces at play.

It should be possible to identify some patterns from a current situation via the negative forces that are present. Some lack or some present pain is often the motivation. Deduce from what is clearly or not so clearly missing. In the work on the Eishin Campus, ‘paths connecting classrooms exposed to rain’ and ‘separate classroom buildings’ were latent behind the ‘covered passages’ and ‘one building for all classrooms’ requirements naturally supposed by System B ([NoO2-02] page 366). Look at what you presume and think about why it’s assumed and what it means. Look at the opposite options. Remove an obvious requirement to identify whether change is needed. What nice things are killing you?

If you are in a monolithic codebase, the ability to reach all other code from anywhere is an anti-pattern but also a guide to the pain. Before splitting the code into smaller pieces, you will likely have long build times because monoliths tend to have lengthy build processes, collecting everything together at the last minute. These pains might not be noticeable until you consider what you don’t have. When making architectural decisions, we trade one pain for another, but often, there is a path to solving both when you look hard enough.

James Coplien’s commonality and variability analysis[MPDfC98] is a good source of patterns in software. Most GoF patterns fit this structure. We view something as a static attribute, so we introduce an object or abstraction to allow it to vary. You can often find patterns in software in how we deal with different kinds of design requests.

For example, when we must vary what we construct, we make our constructors into variables: the Factory Method. In C++, you cannot take a pointer to a constructor, so you must wrap construction in a function or an object. This is not an object-oriented design pattern but an idiom of C++. The parameterisation of factory methods, however, is applicable in most languages, meaning the core pattern holds, even if the presented forms do not illustrate invariants of all solutions.
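
A minimal sketch of that idiom, with invented names: because we cannot point at a constructor directly, creation is wrapped either in a callable or in a factory object, the classic Factory Method shape.

    #include <functional>
    #include <memory>
    #include <string>

    class Widget {
    public:
        virtual ~Widget() = default;
        virtual std::string describe() const = 0;
    };

    class Button : public Widget {
    public:
        std::string describe() const override { return "button"; }
    };

    // You cannot take the address of Button::Button, so wrap creation in a callable...
    using WidgetFactory = std::function<std::unique_ptr<Widget>()>;

    // ...or in a factory object.
    class WidgetCreator {
    public:
        virtual ~WidgetCreator() = default;
        virtual std::unique_ptr<Widget> create() const = 0;
    };

    class ButtonCreator : public WidgetCreator {
    public:
        std::unique_ptr<Widget> create() const override {
            return std::make_unique<Button>();
        }
    };

    int main() {
        WidgetFactory makeButton = []() -> std::unique_ptr<Widget> {
            return std::make_unique<Button>();
        };
        auto a = makeButton();          // constructor as a variable
        ButtonCreator creator;
        auto b = creator.create();      // constructor as an object
        return a->describe() == b->describe() ? 0 : 1;
    }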

What is common to some data is a transform. What is common to some transforms is an algorithm. What is common to some algorithms is a paradigm of programming. What is common to different paradigms is goal definition. Commonality and variability analysis allows you to find patterns, no matter the scale, but it also helps you find the hierarchy in your language.

1

Yes, I made this one up, but it seems a reasonable pattern resolving forces in a context.

Use systems theory

The only way to beat a system is to understand it. If we want to develop our processes into healthier ones, we need to understand the system reinforcing those that harm us. There are many elements inhibiting a better way of developing software. Understand what supports those elements and work on changing the feedback loops. Only then can we control how we are affected by them.

This is true not just for large things but also for small ones. Not just for processes that create problems for people, but also for the tiny details of deciding what hardware to buy and when to migrate to a new programming framework or language.

Look at how each stock affects the flow

Stock is anything you can measure: staff wages, toner, office space, products sold, or electricity bills. Many of these stocks will affect the rate of flow of the others. If you’re unaware of the flows between stocks, it will be like trying to optimise code without profiling it.

  • Balance hardware costs against other costs. Bigger hardware costs more non-linearly. Cheaper hardware makes iteration times longer, and productivity increases non-linearly with iteration frequency.
  • Balance features against maintenance. In some industries, features take priority over bugs, but whichever way the priority falls, we must keep a balance. Without balance, features become slower to develop, which contradicts the priority that caused the imbalance.

Look at how the sensors in the system affect the flows

When you look at profit first, you maximise productivity. When you look at costs, you maximise efficiency. Which makes the company grow? Which gains you an outstanding reputation? Which makes staff happier and is more inviting for top talent? What organisational patterns do you see reinforcing these response mechanisms?

Sometimes, a sensor is poorly calibrated. When a team is not performing well, it sets a standard. Poor previous performance can lead to ‘it’s the best we can expect’. Now, we have a drift to low performance. In effect, standards are affected by history, and expectation is affected by standards. So, keep performance standards absolute, or let them rise with each success. Aim for a drift towards higher performance instead. What patterns of behaviour in your community lead to a negative drift and which to a positive?

Dangers of metrics

The environment will always define what is the fittest. If the environment includes some perverse metrics, then the fittest will be equally perverse. Societies valuing wealth and fame create beings with interest and skill in acquiring both. A workplace showing appreciation for people making a visible effort to do more work and praising people who push harder to get through over-commitments will create an overtime culture, not a culture of productivity. People willing to commit to working longer hold each other accountable to do the same. What patterns can you introduce to reject detrimental metrics?

Rewarding the wrong thing can be dangerous. A health professional who does not maintain the right amount of downtime becomes sloppy at work due to overwork or too little rest and causes more damage than is repaired by the work they do. Working hundreds of hours a week as a psychologist, judge, or doctor is irresponsible. Working longer hours to do more work makes little sense for air traffic controllers, video surveillance technicians, or any other job where a lack of concentration on the task at hand can cause cascading failure. But programming doesn’t require concentration, so 60-hour weeks are fine for software development, right? What pattern of forces and context led us to this state?

Reinforcement and re-evaluation

Organisations are systems. They drive what we do, what we can do, and who we are when we are part of them. When introducing a new rule into a system, we need to evaluate whether it makes things better. That much is obvious. But we also need to assess whether it makes ‘making things better’ easier or harder in the future. Some flawed rules can be easy to spot. A good rule that halts growth and adaptation is dangerous. Like evolutionary dead ends, they might be good right now but eventually cause problems. Pinhole camera eyes and employee timekeeping rules share this trait.

If your fire register turns into a timekeeping exercise and Dave notices John isn’t turning up on time, they may decide there needs to be a policy about it. Once the policy is in force, everyone knows management is watching them for failure to comply. Knowing management watches timekeeping leads some people to believe performance isn’t what is really being measured. Disengaged employees will be rewarded for arriving on time and won’t get the support they need to regain a sense of purpose. In contrast, those who care about the company and what they are working on may get chastised for being late or going home early, even if they’re producing 10 times the output of the ‘warm-body-on-a-seat’ worker.

Rules can and should be replaced. Make sure your rules are set up so they can be re-evaluated. Guarantee the criteria against which they are evaluated can also be re-evaluated and changed. Without adaptation, there can only be revolution or annihilation.

Understand feedback

Feedback comes in many forms, but a lack of feedback is the most dangerous. Some people just want to get on with their work. But ‘get on with their work’ is anti-team. Checking for null and returning to get back to work hides a problem and smothers the feedback. Seeing a garbage value in some data and simply clearing it out without checking is just as bad. Did they not ask why there was garbage? Were they even sure it was garbage? One person’s garbage can be another person’s hidden side channel for unsupported data. Do not shy away from errors by hiding them behind validity checks, as bugs will come and triple any pain you feel. If feedback is essential, then a null check is a design error. Numbness kills.

An error is a poor friend but a great teacher. So long as you recognise that errors are lessons and not punishment, you will naturally get better. It is feedback you can act on. If you cannot see your error, you will not learn. If you do not see the error as a lesson, you will not learn.

There’s a good pattern for error codes. State the type of error so that it shows which way to proceed. Keep the root cause and message’s source away from the error code and expose that extended data through a different mechanism. Some systems use error codes with message objects, such as exceptions. Others use a numeric code and a global reason-string-getter. The solution is up to you. The pattern is only the wisdom to return a code that indicates the direction to success for the user and raise it as soon as possible.
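
One possible shape for this, sketched with invented names and no particular library: the returned code tells the caller which way to proceed, and the human-readable detail travels separately.

    #include <cstdio>
    #include <string>

    // The code tells the caller which way to proceed; the detail lives elsewhere.
    enum class SaveStatus {
        Ok,             // carry on
        RetryLater,     // transient problem, try again
        FixInput,       // the caller must change what they sent
        GiveUp,         // unrecoverable, report upstream
    };

    struct SaveResult {
        SaveStatus status;
        std::string detail;   // extended data, kept out of the code itself
    };

    SaveResult saveDocument(const std::string& path, const std::string& contents) {
        if (path.empty()) {
            return {SaveStatus::FixInput, "empty path supplied"};
        }
        std::FILE* file = std::fopen(path.c_str(), "w");
        if (!file) {
            return {SaveStatus::RetryLater, "could not open " + path};
        }
        std::fwrite(contents.data(), 1, contents.size(), file);
        std::fclose(file);
        return {SaveStatus::Ok, {}};
    }

The caller switches on the status to decide whether to retry, fix its input, or give up; only the logging or user-facing layer ever needs the detail string.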

However, errors mean different things for users and developers. Developers need details and information about the right way to handle an exceptional situation. Users need indicators to help them get back on track. A developer needs a relevant documentation page and error code. A user needs visual feedback, live checks, on-screen rules, an indication of what was wrong (think about password entry boxes), and the ability to recover without rework (think about partial undo mechanisms).

In case it’s not clear, a programming language has developers as users, so the language should provide live, clear, visual feedback on screen that the language syntax or structure is not currently correct. Many people love the Rust programming language because it treats developers as users.

Some thank you for what you do. Some blame you for what you fail to do. Many will thank you for fixing what went wrong. Few give thanks for preventing problems in the first place. Some will blame you for not achieving the impossible. Some will praise you like a hero for doing what must be done. There are many asymmetries like these.

Make your application’s document states visible and provide relevant feedback on future actions. As advertised by Bret Victor1, visualise the present and future state. Give the user predictive powers and perfect memory.

The ‘Ship early and ship often’ pattern can be applied to internal projects. Stakeholders can provide feedback quicker (less waste and more confidence). Only build up small sets of changes and try to commit them in small packets. Choose between feature flags and feature branches but have a good reason to choose the latter. If you’re trying to change how you work, making changes and evaluating frequently is essential.
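
A minimal sketch of the feature-flag half of that choice, with invented flag names: both code paths are merged and shipped together, and the new behaviour stays dark until the flag is switched on, which keeps each commit small.

    #include <string>
    #include <unordered_map>

    // Hypothetical flag store; in practice this might read configuration or a remote service.
    class FeatureFlags {
    public:
        void set(const std::string& name, bool enabled) { flags_[name] = enabled; }
        bool enabled(const std::string& name) const {
            auto it = flags_.find(name);
            return it != flags_.end() && it->second;
        }
    private:
        std::unordered_map<std::string, bool> flags_;
    };

    std::string renderGreeting(const FeatureFlags& flags) {
        // Both paths are shipped; evaluation happens at runtime, even in production.
        if (flags.enabled("new-greeting")) {
            return "Welcome back!";   // new behaviour, dark until the flag is on
        }
        return "Hello.";              // current behaviour
    }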

Journalling is a great pattern. Writing reports makes you smart. Saying what you did helps you learn from your mistakes and allows others to help you by providing feedback on things you’ve misunderstood.

The same thing can have different feedback if presented in different ways or at different times. What an observer can grasp, they will evaluate. For example, try to avoid adding visual style to a mock for layout or function. Otherwise, that will be evaluated, and you will get feedback on the wrong aspect. There is a hierarchy in how we assess things. Anything that is considered aesthetic will take centre stage during evaluation. This is why there are page formatting and layout rules for submitting screenplays; the approach is an equaliser.

Fixing anti-patterns

Anti-patterns are the same as patterns in that they self-form, indicating their stability. They are negative feedback loops in equilibrium. That means they push back when you try to fix them.

When a corrective measure doesn’t fix an anti-pattern, it’s usually because of policy resistance. The effort doesn’t fix the problem but only affects the flow in and out of the system. A real-world example is opening a window because the room is too hot, only for the heating to kick up a gear because you didn’t change the thermostat.

The idea is simple. If you know what is wrong but attempt to address the problem directly, the system will maintain the state despite your interference by adjusting different flows elsewhere. For example, when governments introduce measures to reduce the number of fishing boats to help replenish fishing stocks, what is to stop the boats from getting bigger or the trawlers from fishing longer? Another example would be spaghetti code. Attempts to fix it make things better, but things revert quickly without addressing the culture around the original code. A quick change to get a feature complete or a bug resolved will likely require some additional unwanted pasta to get the job done. This regression wears out the programmers who tried to undo the damage—ultimately bringing the anti-pattern back into equilibrium.

To stop unacceptable behaviour, you must understand and address the system, not the symptom. If you don’t like a particular behaviour but don’t understand what stimulates it, you leave the original trigger open to find an alternative source of satisfaction.

An example I heard while listening to the radio one day was about drug testing in prison. Bringing in a test for drug usage caused the prison population to move to a harder variety. They explained2 that harder drugs were less detectable by the tests. Those in charge did not address the tendency of the prison population to use controlled substances in the first place.

The basis of any approach must be to avoid fighting the system and instead coax, coerce, and convince it to follow a new path. When you fight a system, the same mechanisms that built and stabilised it activate and defend the status quo. These constructive, repairing forces are usually more effective than any individual pushing against the system.

If you say you believe test-driven development (TDD) is good but find yourself writing code without writing the test first, analyse the thought process behind your actions. What triggered writing without a test? Did you want feedback from the result of the code faster? Or did you feel the test would be hard to write? If it’s the former, perhaps you’re in an exploratory stage and don’t believe TDD applies there. If it’s the latter, the problem may be that the design needs refactoring so that writing tests becomes easier.

Remember, systems protect themselves by being invisible, so detection requires good metrics. Observe actions, analyse triggers, and actively decide what and how to change. I have found that journalling helps me reflect on triggers and how I responded. You might find coaching sessions or other forms of guided self-feedback help you discover what systems of feedback limit you.

Systems work whether we understand them or not

It takes great character to admit we don’t have direct control. Rather, we must accept we need to understand the systems controlling us before we can appease them. Systems don’t care if you aren’t aware of the flow of logic behind them and how forces propagate; they will affect you regardless. Many claim evolution doesn’t work because they don’t understand how it could work. Unfortunately for them, the evolutionary process doesn’t care whether or not you comprehend it. Natural selection will carry on adapting organisms to the environment either way.

Additionally, none of the systems we inhabit can be correct overall. Some aspects of each system are good for some inhabitants, but pleasure for some is invariably pain for others in any zero-sum system. If you want to make a new system work well, you cannot suggest or attempt to apply a change that makes it worse for those currently in power without a revolution. You must migrate slowly towards it via changes that have no apparent direct negative effect on those in power, so they can’t or won’t fight back against the change. You change a river’s course by digging a better route, not by building a wall.

2

The qualitative interviews with prison staff indicated a widespread belief that RMDT had caused some prisoners to move from using class B drugs to class A because of the shorter period of detectability of the latter. They described anecdotal evidence of being told by prisoners that this is the case. - from http://www.dldocs.stir.ac.uk/documents/rdsolr0305.pdf page 98 onwards

1

I think I first saw Bret Victor in Inventing on Principle https://www.youtube.com/watch?v=PUv66718DII and became a regular visitor to the worrydream website http://worrydream.com/. Much of his work circles around these concepts of better feedback.

Lessons from design patterns

We can learn a lot from the history of design patterns. The works of Christopher Alexander contain many lessons, and the history of design patterns in software should be heeded for fear of repeating similar mistakes with any new technology. So, in this section, I would like to remind you of or bring your attention to some lessons we could learn.

Use patterns at the architectural level

We know software architecture is not easy. Let’s push to recognise that. Wherever you sit in the hierarchy, remind people that architecture can be wrong and fixing it is an option. Change is routine, and refactoring to patterns is expected at all levels.

Remind everyone that architecture is the idiom of the whole. It is only by accident, consensus, or dictate that any software is built the way it is. Software architecture and team communication modes mirror each other, and most of the development cost comes from communication overhead and delayed decision-making.

Mock-ups and prototypes are essential

An application needs a mock-up stage. This isn’t up for debate if you want to develop good software. You can create something small or one-off without a prototype, but anything built to last for a while and of sufficient complexity needs an exploration phase.

Developers use wireframes to create examples of application UIs. These examples allow developers to review UX ahead of prototype development. Software developers couldn’t use these approaches in the past, but recently, there has been an increase in software support for making high-quality, even interactive, mock-ups. The quality of UIs has grown in response. This is proof enough for me that UI is edging closer to a solvable problem, even if UX is still far from it.

Beyond UI, people develop in quicker languages before re-writing or migrating the poorly performing parts. This is how we end up with Python-based prototypes and C++ or C# final versions of applications. Python allows for a cardboard cut-out version of the final app.

When you omit the mock-up stage, you miss out on making big decisions early. As mentioned before, when you make decisions late, you make everything late. Independent programmers often get caught up in making the software functional first, ignoring the design stage, more so now because we often misinterpret the Agile Manifesto[AM01] as suggesting we should refrain from any upfront design. What is the cost of doing this, though? What are the real problems with building first, then deciding how it should be modified?

We’ve moved away from the ‘big-design-up-front’ approach and embraced a more hands-on set of processes. We iterate on our designs until an acceptable final form is found. This works exceedingly well when the target audience is well-known or well-understood by the developers controlling the creation process. When the customer is less well understood, a product owner stands in for understanding the values of the end user. But this leads to developers having ideas of what will make a good system without solid evidence, either to the affirmative or the contrary.

Guesses can be wrong. This is obvious. But what might not be so obvious is how often they can be right in an information-rich environment. What might be ambiguous or unknowable early in development becomes obvious and known much later. We know there are times when developers can be right and be sure they are right. But if these only come at the expense of working through the project from start to finish, then we are living in a horrible fated world of pain and suffering for every developer. However, we know this is untrue. Well, apart from those who choose to write in C++. The decisions we must make before the end of the project can be made much nearer the beginning when we actively engage in mock-ups.

Christopher Alexander almost always produced large, 1:1 scale mock-ups of elements he intended to bring into the world to have a better chance to make decisions as early and accurately as possible1. He could be found making two- or three-storey cardboard cut-out structures to make sure the colour and shape were suitable for a situation. I imagine many architects smirking over the idea of building a prototype of such scale and convincing themselves Alexander was not a capable enough visualiser to design without these crutches. But none of them ever made anything as cohesive and wholesome as he produced on a regular basis. His buildings all have a settled air of suitability to the environment and the users of the building. Many structures by other architects accommodate only one or the other in favour of getting a building put up. But that’s the same approach as the developer who works on the functional side first and the UX second.

If Christopher Alexander could make and use massive cardboard cut-outs to allow him to make his decisions earlier and produce much more wholesome constructions, then why don’t others do it too? Part of the answer lies in misunderstanding the purpose of prototypes and mock-ups.

Many developers, both structural and software, think of prototypes as static examples of the final form. They are not. They are not pictures of a goal; they are stand-ins for the goal that we can test in much the same way as the final piece. These stand-ins introduce many more parts of the final context needed to make the right decisions. An image does not help you identify where the sun will shine at each part of the day, which rooms can see other rooms, or whether you can hear someone three doors away. An image of a device will not tell you whether an item seems small when held in the hand or feels right when it reacts to touch or button presses. The best example of this for computing is the story of the wooden2 Palm Pilot prototypes.

The other misunderstanding concerns mock-ups’ capacity to raise awareness of relational issues and collaboration in spaces. Mock-ups offer the possibility of validating interactions.

Kept apart, individual mock-ups show the edges between elements and hint at where styles and interactions might occur, but only when brought together can they reveal system-wide interactions.

For example, a 1:10th scale model of a building will expose more than a map because it will show how different elements work in three-dimensional space. It might show how high the windows are compared to the surrounding landscape, allowing you to judge the view somewhat. The model could be used to deduce whether the building would cast a large shadow across an area. The shadow of a roof reduces the loveliness of a conservatory. Using a scale model, we can avoid the mistake and find a new location before the first brick arrives on site.

Things like inclines become easier to estimate, and shapes are easier to scale up rather than visualise from a plan view. Without being aware of the slope, you may not know to ask the developer to include steps in the path design, necessitating outdoor lighting for fear of tripping. The chain of effects of every realisation can be long, so it’s always worth mocking up early to bring these considerations to the fore.

But when you don’t mock up, you leave all these realisations until later. The need for four button presses before hearing the track you want reduces the immediacy of your music app. But you only feel the impact once you build it. Bringing the product image up in a modal window might have been a good idea, or opening the gallery on a new screen might have been better. Without a mock-up, you complete the work before your decision can be an informed one. Like in building construction, a mock-up will help tease out the questions and worries before they become problematic.

In software built to present a fictitious world, the content designers create a world where users navigate the space. The space itself is part of the design, much like the guest flow design of theme parks and paths through some furniture stores. To be sure to get this spatial design right, the designers could build the current plan and evaluate it before tearing it down and trying a few more arrangements. They could, but they don’t. They mock, grey-box, and arrange on a model or test with pieces of paper on a table. Whatever they do, they do it quickly and without involving software developers, gardeners, carpenters, or a furniture arranging crew.

When you don’t use mock-ups in your development, you build something that would match your first mock-up. In effect, you are making a very expensive mock-up. You will want to change it but will feel invested. It will feel difficult to say anything. If you don’t say anything, a competitor will have an easy time leap-frogging your attempt. They will be wiser from your mistakes.

Don’t omit the mock-up stage. Prototypes aren’t an extra expense. Mock-ups are not optional. The only decision you can make is how expensive you let them be. If you don’t decide, they tend to use up your whole production budget.

Mock everything. Mock software and sites. Mock books and workspaces. Mock organisations and teams and customers. Every decision is easier to make when you have the power to visualise the structure about which you are making the decision. Every decision is easier when you can think clearly about the relationships between things in their final home.

Patterns are an emergent property of something more fundamental

We must also learn to continue the research. Christopher Alexander did not stop when he developed the process of simplifying complex construction in Notes on the Synthesis of Form. Instead, he went on to produce A Pattern Language. From there, he did not stand still either, instead developing an even deeper understanding of patterns and their origin by discovering the 15 fundamental properties of forms. The programming pattern movement stagnated in this respect, not moving on to finding fundamental properties of anything so far. We should continue our work.

For Christopher Alexander, the properties reflected repeatedly encountered instances of a class of action in space and time, like a fractal recursing on itself. Iteration of action, not a specific action and not an outcome. Application of one of a selection of transformations, not the result.

The patterns in A Pattern Language were named. In The Nature of Order, there were 15 types of unfolding in a context. If we return to the GoF book[GoF94] patterns, we find unfolding along the lines of nominalisation, aggregation, and wrapping. The contexts are ‘needed variation’ or ‘unification of the disparate’, but what else? Storage? Robustness? Remoteness? Verification?

So, we need to look at the patterns and pattern languages we have in software development, find the places to make our incisions, the lines between the domains, as Christopher Alexander did with architecture and colour, and inspect the patterns in those domains to see if we can find any fundamental forms to help us define a more straightforward unfolding process in each of them.

1

In [NoO4-04] from page 287, he reveals how the West Dean Visitor Centre was constructed with attention to subtle details only visible when using large mock-ups. In much earlier work, he suggests using wooden constructions so they can be adjusted more easily until they are just right[TTWoB79], and in [TPoH85], in the ‘Postscript on Color’, mock-ups were used to identify just the right amounts of yellow and green to mix in to make the building colours perfect.

2

It was literally a piece of wood with a printed version of the UI pasted to the front. A quick web search will find the whole story and possibly change your mind.

Don’t accrete, unfold

Complexity is the main antagonist in your developer life story. Avoiding it should be one of your highest goals. Apart from embracing the ideals of simpler code, you must also remember the value of preventing unnecessary complexity introduced by integration.

The development of software is generally a case of adding to an existing codebase by introducing new code through alteration or integrating something new. The unfolding process asks us to take a sequence of steps, always maintaining a successful system at each step. This is at odds with integrating anything from outside.

The alternative strategy of adding complexity and trying to pay it down later only works if there is a later. Accretion leads, inevitably, to highly complex, difficult-to-maintain code. So, we should use unfolding sequences or admit we intend to increase complexity and are willing to pay the price. This is one interpretation of technical debt, pushing the cost into the future.

Growth through differentiation

But how can you develop software so that all steps are complete? A step is complete when we apply a pattern of differentiation to the space or identify an issue and repair it into a better state. One way is to follow the advice of Mel Conway and make software a thing that is running and malleable all the while. This represents a similar situation to the way Christopher Alexander built his buildings. We take a look at where we are and see what needs to happen next. And then, we do it right there with our hands. Very agile. Working with suitable materials is a prerequisite. If the materials don’t allow you to make complete steps, the materials may be at fault, or it could be that you need to make different changes before you can make the necessary changes. We should invoke the wisdom of Kent Beck’s famous tweet, ‘for each desired change, make the change easy (warning: this may be hard), then make the easy change’. In other words, relentlessly refactor.

But what of integration? Integration isn’t a form of differentiation; it’s a form of including something from outside. Was this seen before in Christopher Alexander’s work? Perhaps. Including some outside construct to help solve a problem could be seen as using bricks or plumbing. The whole is concerned with the larger structure. Integration of utility methods and capability libraries is easy to resolve because while they are complicated materials, they are materials out of which we assemble the greater whole.

But then there are the more turbulent integration tasks, such as when a whole subsystem must be introduced to resolve a particularly gnarly problem. Then, it’s more than just complex material. It’s a joining of forces under a new whole. In this case, the only realised example I can think of is not a project by Christopher Alexander; it’s the Citicorp Center building.

The building has a novel four-stilt construction because it had to share the land with a church, even though the bank was permitted to use the airspace above it. This peculiar contract first led to a single hollowed-out corner solution, but then the decision was made to hollow out all the corners, building the whole structure on stilts and a central column. But, even though the structure was sound according to all standard regulations, the novel design introduced non-standard problems. The stilts were in the middle of the faces of the building, so in effect, they created a footprint of a smaller building at 45 degrees, which could have led to catastrophic failure1. This is true of any codebase that has to contort itself to a new configuration to adapt to the incoming large change. Tests that previously sufficed to give strong indicators of correctness may no longer cover all the aspects of the system as a whole, and emergent and unwanted behaviour can manifest.

Is there a safe way to integrate large changes? Maybe not. However, being aware of the difference between integrating plumbing and integrating an entirely new way of handling water and gas is enough to raise the sensitivity to potential problems. We become aware that it’s not ‘what we always see’. Our tests breed a form of complacency. When introducing something larger, we must be alert to their inadequacy.

Stresses and tensions can guide our unfolding and differentiation, but each act should maintain wholeness. Just as massive integrations cause issues, disintegration can also cause problems. Dangling effects such as unused features or libraries are stressors, so they should be culled. If we keep them because we think we might need them later, then we should be aware that this is not strategic thinking but loss aversion.

What is unfolded?

Behaviour is unfolded. Structure can be unfolded too. But the materials themselves are not. They most often emerge from an unfolding, being required to perform it. We introduce new materials to differentiate or increase the quality of an existing element. You don’t unfold the design of a pipe, but you can unfold how the pipework connects to all the other pieces of the puzzle. And you can unfold the rooms’ design using that pipework pattern language. Unfolding is about taking something that is already complete and taking it to the next step. A staple, a nail, or a brick is not a construction. This is why they don’t unfold. They are what we use to complete the unfolding process.

Unfolding will always be of something that is already at least somewhat complete. Something whole and extendable based on a revelation of what is missing. Something that, even if utterly incapable of solving the problem it was intended to solve, is at least in a reasonable position to begin to solve it. A ‘Hello, world!’ program is whole with respect to almost all possible applications. From that small starting point, a direction is apparent, and unfolding can take us along the many steps to a batch processing application, a web service, an IDE, or even a computer game. Each step is small, but you can always get there from here. Each step is growth and adaptation.

Optimisations can also be unfolding operations. Consider deduplication of code increasing clarity. We can adjust a now obvious spatial problem to include a partitioning container. We can review our code for anything we deem inconvenient or sub-optimal at any step and make that next step one in which our actions are optimising space, time, clarity, safety, or security. Not every unfolding has to be towards a customer-visible goal.

What doesn’t count as a step?

Copy-pasting code into an existing program usually brings with it some external context. That context may bring unexpected complexity or resolution to problems of which you are unaware. This quiet resolution is not entirely healthy. You won’t know what to fear. The specific decisions made when writing the copied code are lost in their static nature. Copying code gives you a solution but not wisdom, and not always resolution.

In addition, not-quite-finished code is not a step. Doing some work and adding it to the codebase only makes it a step once it is live. If some intermediate code is important, then there are two options I can think of. Either you put it in a new branch as an incomplete step, or you use what you have to make a step, even if the change is not a direct improvement. It’s okay for it to leave things the same. It’s just not okay to make things worse. And untested, unused code is worse.

Feature flags create a strange world where the unfinished lives beside the working code. Disabled code causes stresses in the working code by its presence but adds no direct value. I liken this to setting up a temporary kitchen in your living room with a plug-in stove, microwave, and kettle while refitting your regular kitchen. It’s a stressful time where nothing works quite as well as it should, but there’s hope of a brighter future.
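
As a minimal sketch, assuming a hypothetical flag table loaded from configuration, the two code paths live side by side like this:

# A feature flag guards the unfinished path while it lives beside the
# working code. All names here are hypothetical.
FLAGS = {"new_checkout": False}  # assume this is loaded from configuration

def legacy_checkout(basket):
    return sum(basket.values())  # the working code we rely on today

def new_checkout(basket):
    # the 'temporary kitchen': not yet trusted, still being fitted out
    return round(sum(basket.values()) * 0.9, 2)

def checkout(basket):
    flow = new_checkout if FLAGS["new_checkout"] else legacy_checkout
    return flow(basket)

print(checkout({"book": 20.0, "coffee": 3.5}))  # 23.5 while the flag is off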

Can the unfolding process resolve this? How do you do a bit of code surgery on a sizeable living codebase when you need to replace a whole system or way of working? It’s the problem of repair, not unfolding. Sometimes, repair feels different because something is in pain during the operation of restoration. Does this help us? Recognising a code change as a repair rather than typical development allows us to give it the space and respect it deserves. It does not help make it safe, but it does suggest we put up warning signs and hazard tape around the site of the wound and work. And that is what the feature flags do for us. By their presence, they indicate a thing in flux and inhibit accidental regression by compiling the new and old code in the same space.

So, repairs are not always steps of unfolding. Sometimes, they are surgery, and to do surgery, you often need a scalpel, and the patient will need time to heal.

Final form

The final form of an unfolding process is beautiful because it is an unfolded form, not because it is final. The consciously made decisions producing the form provide its beauty. The lack of debris from process accidents makes it beautiful through a sense of effortlessness and cleanliness. When the form is considered complete or in a state of satisfactory equilibrium, it is not beautiful because it is static. Often, we give up on what we could have had by striving to achieve strictly what we thought we could get. Unfolding guides us to what we could have so long as we keep our eyes open, often revealing a superior and beautiful final form.

1

Many of the details here are paraphrased from the Wikipedia article https://en.wikipedia.org/wiki/Citicorp_Center_engineering_crisis or from the ‘99% invisible’ podcast https://99percentinvisible.org/episode/structural-integrity/

Learn to name well

We must be careful from now on when naming patterns. Naming is hard. This is known. Naming patterns is much harder. This is why we should probably stop naming them and refer to them only by their context and forces, or use sentence-like names as was done in Christopher Alexander’s last book[Battle12]. Using forces allows users to know where to look. Specifying the context filters the results and shows alternatives that help create a deeper understanding of the patterns.

For example, instead of referring to a Strategy pattern, refer to the context of existing code handling only one variant. Introduce the forces as a new requirement for the code to adjust behaviour based on other information. The GoF Strategy pattern is a solution when:

  • You want to offer variations on behaviour, but
  • You don’t want the decision-making process limited to options you can think of right now, e.g. you wish to allow plugins or users of your library to add new possibilities, and
  • You are using an object-oriented language, but
  • It does not have access to function pointers, or
  • You wish to serialise the choice of behaviour.

Then, a Strategy object works to solve the problem. Outside of the context of object-oriented languages or when you have function pointers, the larger pattern is solved by other means, so using the name pollutes the value of thinking about the larger pattern.

So, what is the larger pattern? It’s dynamic behaviour, which is a bit of a performance anti-pattern, but we’ll let that slide. Instead of demanding an object, we record the pattern as state-determined behaviour. Some languages don’t have a case/switch statement, some don’t have function pointers, and others don’t have objects; however, the core purpose and value of Strategy is achievable in all. We point out that the state needs to be stored somewhere and insist on taking care in concurrent contexts, providing the necessary wisdom to decide how to hold the state.
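
As a rough sketch of that idea, with hypothetical edit modes, the state-determined behaviour needs nothing more than a dictionary of functions keyed by the stored state:

# State-determined behaviour without objects: the stored state selects the
# function to run. The modes and behaviours are illustrative only.
def overwrite(document, text):
    document["body"] = text

def append(document, text):
    document["body"] += text

BEHAVIOURS = {"overwrite": overwrite, "append": append}

def apply_edit(document, text):
    # the state held in the document decides the behaviour at the last minute
    BEHAVIOURS[document["edit_mode"]](document, text)

doc = {"edit_mode": "append", "body": "Hello"}
apply_edit(doc, ", world!")
print(doc["body"])  # Hello, world!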

When naming an object-oriented version of a pattern, don’t be afraid to use a verb nominalised into a noun. The idea of many object-oriented practices is to create objects that represent actions. A place in which an action takes place can be named after it. 112 Entrance Transition, Boot Sequence, and Cyclic Update are all valid. So, feel free to rename Strategy as Last-minute Decision-Making.

Context rules

Whenever possible, introduce an element of the context over just describing a pattern. Sometimes, the context is well known, and so is the solution, so writing about it is about making sure related things are linked together. However, in cases where the contexts are less well-known and solutions are scarce, the context needs a vibrant presence in the pattern. You should represent the lack to be satisfied in the name. Dark rooms or ones in which the light is not good are referenced by the pattern 159 Light on two sides of every room, although indirectly. However, patterns without even an indirect name to an unexpected context will be missed, and only those who know the pattern very well before they come across the context will be able to use it. Examples of names that defy their context include most of those found in the GoF book.

If things are different, refer to them by different names. A ‘crash’ might be a hang, a transaction failure, or some other unexpected behaviour. Which one? A user error might be a program error, fault, bug, invalid input, misuse, mistake, or stumble. What actually happened is the cause, the problem that we want to use to name our pattern.

You can only make decisions about things of which you are aware. Some decisions can only be made by creating a new label for a new configuration. As labels often lead to a meaning or essential nature, sometimes the decision can come down to your chosen name or attributes belonging to the new label.

Ultimately, all patterns need names. Otherwise, we cannot communicate with them as a language. But they must not be trivial. Small API, deep classes is better than Thin Classes. All work is not done until checked by the customer is more explicit than Customer value. Names can help when they state something that is not obvious. For example, repeatedly saying, ‘The customer comes first’ will not be as loud and clear as ‘If the customer wants your shoes, you must ask them whether they want them gift wrapped’.

So, name your patterns after elements that stand out, but make them discoverable by ensuring the problem is referenced, not a solution.

Learn to write well

This point comes up again and again in different guises. The value of a pattern is how well it communicates how to solve a problem in a context—how you resolve the forces. Here, the forces are our wish to resolve problems for others by suggesting patterns to guide them. To resolve these forces, we need a solution for how best to present patterns, which means growing as communicators.

Some forms of writing education are practical. Spelling and grammar are essential to not trip up the reader while they are reading. But beyond the kinds of errors a built-in spell-checker can find, there’s also the realm of automated style support. I’ve used a few of these tools over the years to help guide my writing, but I keep falling back to older techniques, such as reading the text aloud and using printed proofs. Whatever your solution, we know there won’t be one solution that suits everyone. Otherwise, there would be fewer books on how to write.

There are a huge number of books on how to write1, and not all of them are on spelling and grammar. Some of my favourites that apply to pattern writing concentrate on style and content rather than polish. You might be surprised to learn that many of these books are on how to write better stories. The reason for this is my recurring observation that patterns are more effective when framed as stories.

Patterns have a setup, a problem, and an inciting incident. They have a journey through a sequence of steps. There is a goal at the end, possibly unknowable at the start. This is a story. When you have an anecdote from a problem you solved, it’s a story. A pattern is a parable of how people have resolved a situation in the past. It’s a lesson on how you might use this wisdom to save you from the same pain in the future.

There are books on how to do this. There are videos online on how to write better. Takashi Iba even collected a pattern language on presenting, covering many of these story-writing aspects in Presentation Patterns: A Pattern Language for Creative Presentations[PresentationP14]. Storytelling, presenting, and writing for a film or play are all connected methods to transfer a story to another person. They all share features.

I recommend learning how to make stories that capture the audience’s attention. I would select Made to Stick[Stick07] to give you the groundwork for any anecdote-style delivery. The book contains many examples, and my copy has a permanent bookmark at the end in the reference section for when I feel the need to tighten up a work.

For selecting and structuring, any work on outlining will do, but two different works come to mind. The book Outlining Your Novel[Outlining11] is a way to work through a story, digging deeper and deeper into the plot. This can work well for larger works, but knowing about the technique can speed up a shorter piece’s development. The second work is a collection of courses by Shani Raja. His courses concentrating on how to write with clarity, evocativeness, and simplicity are good. But his editing course teaches you what you need to know to turn an outline and a collection of notes into a tight article or book. Unfortunately for you, I did not study his material until this book was in the first draft state.

Engaging the audience is the key. One of the first things I watched on this subject was by Larry McEnerney2. He believed no one came out of school knowing how to write. They needed to learn what writing was really for. He claimed students had been taught to write to prove they knew, not to convince, surprise, engage, or educate. This skill, which is vital in later life, was missing from the regular curricula, at least in science, technology, engineering, and maths education.

His recorded lectures on writing opened my eyes to the problem I faced. I had not considered my readers. I did not know why they should care. I had been so intent on writing out what was in my head that I had forgotten, no, never even considered, who I was writing for. As I learned this, internalised it, and began to practise writing with intent, I started to see problems with the books I read. Notably, most patterns I read while researching were written to explain a solution. Their authors wrote about how someone solved a problem but did not describe why it mattered and to whom it might matter in the future.

Richard Gabriel was aware of this problem. He often wrote on the subject, even publishing the Writing Broadside[Broadside]. It’s an element of software development not often brought up, but the more we think about software as being a team effort, the more obvious it becomes that good technical writing is essential to the quality of long-term or large-scale projects. At that point, it should be considered a critical, even core skill of a software developer.

1

I have half a shelf of books on writing, and a wishlist of more. I dislike how I’m calling myself out in this section.

2

Larry McEnerney was the program director of the University of Chicago’s Writing Program. A few of his talks were captured on video. Some are available via a short web search due to his unique name.

Elicit the deeper needs

Learn how to listen to the client better. Hold silence open for them to reveal what they need from you. Instead of doing what the client wants, you must think about the quality without a name and the process of removing ego from the task of discovering requirements.

Every project will start with a vision of a final product. The vision is a finished form, but the form as guessed by someone who has yet to take the first steps to discover the root of their problem. If the most crucial part of software development is understanding the problem, then make that the reality of your processes.

Your client is an expert in knowing when you have not satisfied them, but they are not the architect or the builder. They don’t know what they don’t know, and they don’t know what you don’t know, either. They will not ask for things they need that are obvious to them. They will also not ask for anything that seems juvenile or unofficial.

Remember the Eishin Campus project—the need for rain on faces and a contemplative walk by the water. In software, the need can be as simple as not requiring a login to first use the application, or the need may be elements of UX, such as seeing the effects of your actions before you take them. Options for theming, such as offering a light mode and a dark mode, may seem frivolous or optional to many, but for some, they can make the difference between a readable application and one that is barely usable.

And always remember the XY problem. Remember the ‘faster horse’ quote and the person stuck trying to figure out how to make their database transactions faster rather than reducing the number of pointless transactions. We can resolve many of these problems by asking ‘why’ a few more times. Other times, even an interviewer can get a little stuck on a solution and will need to step back. In each case, one round of questions is never enough.

You can only learn answers to the questions you can ask. You can only ask questions when you know you don’t know the answer. You can only know there is a question when you can detect ambiguity or gaps. It’s learning to see these gaps that makes you a software engineer, not your ability to write code to solve an already well-defined problem.

So, go now and spend more time reading about bugs than successfully deployed features. Bugs are unexpected behaviour, so figure out why it was not expected. Gorge yourself on failures and grow on a diet of mistakes.

Pattern Reference

This section is best used as a reference, not read end to end. I have summarised the GoF book[GoF94] patterns in my own words. I have also provided some commentary on where I think they fall down or could be improved. But I only attempt a light summary here unless there is a particularly prevalent problem with the pattern.

In addition, I have summarised some of the patterns from A Pattern Language[APL77], and described other software patterns referenced in this book.

Summaries for the patterns in the GoF book

This reference is a personal summary. My interpretation of the patterns could be wrong or missing some critical nuance, making the criticisms invalid. The original source is the GoF book[GoF94], Design Patterns: Elements of Reusable Object-Oriented Software. In some cases, I have attempted to critique a pattern and added notes addressing my concerns. All these summaries and notes are my interpretations and opinions.

The order of presentation is different to that of the GoF book. I want every pattern to make sense to the reader as they read the reference from start to end. This means the list is not in the original or alphabetical order. The list is in a loose order of interdependency. With this ordering, in almost all cases, I have been able to describe a pattern without referencing a pattern not yet defined. The exception is the closely coupled pair of patterns Command and Memento. For those, I put them right next to each other. Otherwise, the ordering is somewhat arbitrary.

  1. Factory Method
  2. Singleton
  3. Builder
  4. Prototype
  5. Strategy
  6. Template Method
  7. Abstract Factory
  8. Composite
  9. Adapter
  10. Bridge
  11. Decorator
  12. Façade
  13. Flyweight
  14. Proxy
  15. Chain of Responsibility
  16. Command
  17. Memento
  18. Interpreter
  19. Visitor
  20. Iterator
  21. Observer
  22. Mediator
  23. State

Factory Method

AKA: Virtual Constructor

In many languages, creating an object requires you to know what that object will be. However, sometimes you only know the base type to be constructed and when to construct, not the specifics.

For example, you might wish to open a file by URL. The prefix or path might indicate a specific type, but the caller only knows it must be a file handle. A generic solution can map regular expressions or other types of matcher to a key-value store of factory methods where each creates a specific kind of object to handle the file open request. Consider a URL for a local file, a networked file that might need buffering, a memory-backed file that might need emulating from within the program, or a pipe that opens a stream to another process.

A file handle can manage all those examples, but the object behind the handle is different. We must create the object based on some other information. In some systems, it’s decided upon just in time as per the file opening via URL. Other times, we handle it as part of the configuration, such as when you configure the kind of UI used for an error or request for user input (such as with CreateFileDialogue) or when the creation is relative to the subject of the created object as per creating editing tools directly from the objects to be edited.

In effect, Factory Method is the nominalisation of construction specifics when you don’t have a way to reference a constructor in the same way we can store a reference to functions. The constructing object won’t need to know which concrete type it must construct and can even build objects added later by dynamically loaded code, e.g. by plugins.
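
A sketch of the file-opening example, with hypothetical scheme prefixes and handle classes, might map matchers to factory callables like this:

# Map URL prefixes to factory callables in a key-value store, so the caller
# only ever receives a file handle and never names the concrete type.
class LocalFile:
    def __init__(self, url): self.url = url
class BufferedNetworkFile(LocalFile): pass
class MemoryFile(LocalFile): pass

FACTORIES = {
    "file://": LocalFile,
    "http://": BufferedNetworkFile,
    "mem://": MemoryFile,
}

def open_url(url):
    for prefix, factory in FACTORIES.items():
        if url.startswith(prefix):
            return factory(url)  # the factory decides the concrete type
    raise ValueError(f"no handler registered for {url}")

print(type(open_url("http://example.com/data.csv")).__name__)  # BufferedNetworkFile

A plugin could register a new prefix and factory in the same dictionary without the calling code changing at all.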

“I make benches. What type of bench? That doesn’t matter. I was instructed by someone else on what style to produce. You just tell me how many they should seat and where you need them.”

Singleton

When you only need one copy of an object, such as a manager for a type or resource access point, it can reveal new requirements. Two requirements typically come out of a demand for uniqueness.

You may want to guarantee there’s never more than one for safety or security reasons. Or it may be because it makes no sense to have more than one. Uniqueness guarantees are easier to make if you use strong countermeasures.

If there’s only one, then there’s the question of how to access the only copy. How do you ensure no other part of the code accidentally creates an additional instance because it could not find the proper one?

The GoF solution is to make construction the job of the class, not an object. The suggestion is that other local instances can be created when the type itself doesn’t claim responsibility for maintaining uniqueness, even with a single global pointer to store the instance. Notice that in some languages, class objects (as opposed to object instances) are Singletons, but not all.

There are three problems which keep occurring with Singletons.

  1. Non-deterministic construction.
  2. Lack of destruction.
  3. Global shared state.

Non-deterministic construction has a two-fold effect. Something is going to be the first thing to fetch the instance. That will cause a spike in the processing at that point, but undeservedly so. My background is computer games, and finding that a Singleton caused a one-off performance drop in the game when we first used it was an annoying problem, but at least I had a reasonable workaround: commit a pointless fetch of the Singleton before you need it, and any workload is moved to that point in the program at the latest.

The second timing-related issue is one of non-deterministic creation order. If a system constructs on first use and use is based on something non-deterministic, your program will create Singletons in different orders in different executions.

Many things are non-deterministic in reality. If the object is only accessed based on data loaded, and the application is multi-threaded or the system is referenced based on user input or even their reaction time, then consider the Singletons to be created at random times and in a random order.

Dynamic or lazy creation has bitten me several times, and again, it’s easy to fix by explicitly calling the getInstance of each Singleton in a known order early in the program before anything non-deterministic happens. But if both these fixes are necessary, why create on first access at all?

I prefer explicitly creating each object during the early phase and placing it in a globally accessible location.

The second main problem—the lack of destruction—presents additional issues. Usually, Singletons will not be guaranteed to destruct. Immortal objects can lead to difficulties in finding resource leaks. You need to keep track of which resources are held by Singletons and ignore them or find a way to ask them to release their grasp. But if you need to ask them to release, you may as well destroy them.

Destruction is also my preference. I like to delete all Singleton objects at the end of main in C++. This way, memory and file system managers can warn if their resources are not fully released.
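
A minimal sketch of both preferences, in Python rather than C++ and with hypothetical manager classes, keeps construction in a known order and makes teardown explicit enough to report leaks:

# Construct each unique object explicitly, in a deterministic order, then
# tear down explicitly at the end so managers can report anything unreleased.
class FileSystem:
    def __init__(self): self.open_handles = []
    def report_leaks(self):
        if self.open_handles:
            print(f"leaked handles: {self.open_handles}")

class Renderer:
    def __init__(self, file_system): self.file_system = file_system

SERVICES = {}

def start_up():
    SERVICES["file_system"] = FileSystem()
    SERVICES["renderer"] = Renderer(SERVICES["file_system"])

def shut_down():
    SERVICES["file_system"].report_leaks()
    SERVICES.clear()

start_up()
print(sorted(SERVICES))  # ['file_system', 'renderer']
shut_down()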

The third and last main problem with Singletons is their state. A Singleton is defined as a pattern of a unique object. Implicitly, some internal state could be allowed. It’s also described as having a way to access the object. In essence, the object is global because global refers to access—a global shared state.

A stateful singleton seems as dangerous as any other global variable. But without state, the Singleton could be a collection of free functions. So, realistically, the only time a Singleton is necessary is when it’s dangerous or when you really want something to look like an object.

“I am unique. No others like me. I’m a self-made object. You can find me by name alone.”

Builder

There are two flavours of Builder pattern as far as I am concerned. They are different as they present themselves in different problem contexts and solve different forces even though they look very similar from the outside. The first is the Stream Builder.

When you have data or an event stream used to construct a complicated object, you may desire different final objects based on the same stream of events. The example provided in the GoF book had builders consuming events produced when reading an RTF file. ASCII, LaTeX, and GUI Widget builders handled the text and formatting events; each builder dealt with the events in their own way.

A Stream Builder does not have to construct a complicated object. The GoF example converts from RTF to ASCII. You can use a Stream Builder for structured translations. Converting from YAML to JSON or vice-versa makes just as much sense. The core principle is that a concrete Stream Builder should consume events. The last event is often a finalisation step (GetProduct in one example), which seals the translation. If you require some last-minute bookkeeping, you can use this last event to trigger that work.

In effect, a Stream Builder is a way to take on the temporal coupling of events and data and convert it into another set of events and data. Under this interpretation, the disjoint temporal coupling requirements of the two different forms might necessitate a Stream Builder to maintain some considerable intermediate state before closing. The builder may need to hold instructions and data for later reference, to be sorted, or to be produced in a different order when translated into the new structure.

We often encounter conflicts when selecting the granularity of calls for a Stream Builder. What is too fine for one class might be too coarse for another. Another issue which can recur is how to refer to the results of previous actions in the build sequence.

That was the first type of builder. The other type ignores temporal coupling; instead, all its events are about pre-configuring before construction. The Expert Builder handles this other case, where it’s not about interpreting events but directing construction.

There is almost an example of the Expert Builder in the GoF book with the MazeBuilder. The point of the Expert Builder is that it solves construction problems for you. It can return errors when it detects irreconcilable conflicts and may attempt to resolve constraints given the specifications provided by the director’s events. You generally find this type of builder supporting GUI-driven directors or scripting systems. The difference with this builder is that it commonly has a verification method to determine whether a configuration is complete and valid.

What’s shared by both builders is the concept of a director. There will be something providing events. Whether ordered or not, they must come from somewhere, so the director should be considered part of the pattern definition.

Both types of abstract Builder define the events which can be fed to the concrete builders, so they need updating when a new, more complex director arrives, but not when a new concrete Builder appears.

I suspect there is only one builder type in the GoF book because of the similarity of the final object diagrams. You can have an Expert Stream Builder without any conflict. Without a problem-centred approach to documenting patterns, patterns with neatly overlapping solutions will always suffer from a narrowing. Directors for both types of builders can be anything which emits events. They could be structured file readers, scripts, or functions. The builder must be able to accept these events and optionally handle a finalising step.

You can implement a builder with a dictionary or two—one for the methods and one for the current state.
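
A minimal sketch of that dictionary-based builder, with a hypothetical director script supplying the events:

# One dictionary maps event names to handler functions; a second holds the
# accumulated state. The event names and script are invented for illustration.
state = {"rooms": [], "floors": 1}

def add_room(name):
    state["rooms"].append(name)

def set_floors(count):
    state["floors"] = count

HANDLERS = {"add_room": add_room, "set_floors": set_floors}

# The director: any source of ordered events will do.
script = [("add_room", "kitchen"), ("add_room", "study"), ("set_floors", 2)]

for event, argument in script:
    HANDLERS[event](argument)

print(state)  # {'rooms': ['kitchen', 'study'], 'floors': 2}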

“I build houses. Let me know how you want it constructed. How many bedrooms? How many floors? Do you need a driveway or garden?”

Prototype

So, you’ve got a complicated object to build and you’ve been using an Expert Builder to make it, but now you realise you need multiple copies of these objects. You may no longer have access to the builder after construction. It happens. The builder may not be code you control. Either way, you want lots of these complicated objects and using the builder whenever you need a new one adds overhead.

The other time this becomes necessary is when you are working inside a document and hoping to copy-paste a dynamically constructed object into another position. In this case, you wish to clone a live editable object.

Instead of building each new object, you can create by cloning. Take the first object as a Prototype and keep it pristine. Use it only as a template to generate more when needed.

You can also use Prototypes when the class to construct needs to change at runtime. Furthermore, if the instances share read-only data, cloning can reduce the cost because it’s often quicker to identify and reuse read-only references rather than having different mechanisms for constructing different parts of objects. The clone operation only needs to duplicate the mutable state.

Cloning is applicable even when a deep copy is required, provided loading data or calculating values for construction costs more than the clone itself.

In languages without reflection, the base class will need to expose a clone method. You must override it for any necessary exceptional cases; otherwise, it will do what it says and return a unique copy of the object.

Recurring issues with prototypes generally revolve around identity. Usually, objects are created, live their life, and then die. But with a prototype, some actions will have happened on the original object, which have now also occurred in the past for the clone. They share a history before the point of cloning. Cloning mistakes can lead to shared unique IDs. Objects can become subscribed to more than one publisher when you clone the publisher or a clone might think they are subscribed when they are not. Anything where the object’s identity is relevant becomes a potential source of error.
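
A rough Python sketch, with hypothetical class and field names, of a clone method that duplicates mutable state, shares read-only data, and deliberately refreshes identity:

import copy
import itertools

_ids = itertools.count(1)

class Shape:
    def __init__(self, points, style):
        self.id = next(_ids)  # identity is never copied to a clone
        self.points = points  # mutable state, deep-copied below
        self.style = style    # shared read-only data, reused by reference

    def clone(self):
        duplicate = copy.copy(self)            # shallow copy first
        duplicate.points = copy.deepcopy(self.points)
        duplicate.id = next(_ids)              # avoid shared unique IDs
        return duplicate

prototype = Shape(points=[(0, 0), (1, 1)], style={"colour": "red"})
stamp = prototype.clone()
print(stamp.id != prototype.id, stamp.style is prototype.style)  # True True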

“So, you want more of these? Okay, I shall make usable copies for you.”

Strategy

AKA: Policy

When you need your program to behave a certain way in response to events, you usually use configuration to set the behaviour and branching in the code to act as specified. A spell checker might check words and fragments for spelling and grammar errors but needs to know which language to check against. So, you set a variable for which language and when it comes to spell checking, you load the correct dictionary and check the spelling.

However, when you add new languages, you must add new code switches. Not all languages have the same types of word sequences to verify. Sometimes, words are not separated by spaces but by other characters. Sometimes words might not be in a dictionary, but simple rules might guide regular constructions such as the hyphenated-sequence, or neologisms, such as malamanteau1.

Just storing a variable for which dictionary to load is no longer sufficient. You need to select a language-appropriate processor. With each new language, there is now a chance you need to add another case to the switch that decides how to check spellings.

Instead of a switch statement, you can use a strategy or policy object to encapsulate the checking algorithm. You must define the API upfront but not much more. This way, the checking algorithm can change as per the switch but the checker no longer needs to know about the checking algorithm or even what checking algorithms are available at compile time.

To add a new language with new rules, create a new class which implements the checking algorithm. You can provide an object of that type to the checker as necessary. A plugin would, upon registration, create and supply a checking algorithm object (potentially as a stateless Singleton) to the spell-checker. The spell-checker can store these in a dictionary, as every implementation follows a standard interface.
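
A sketch of that registration idea, with invented languages and rules purely for illustration:

# Each language supplies a checker with a common interface; the spell-checker
# stores them in a dictionary and never knows the concrete algorithms.
class EnglishChecker:
    words = {"hello", "world"}
    def check(self, word):
        return word.lower() in self.words

class KlingonChecker:  # a plugin could register this long after compile time
    def check(self, word):
        return word.endswith("'")  # invented rule, purely illustrative

CHECKERS = {}

def register_checker(language, checker):
    CHECKERS[language] = checker

def spell_check(language, word):
    return CHECKERS[language].check(word)  # the strategy does the deciding

register_checker("en", EnglishChecker())
register_checker("tlh", KlingonChecker())
print(spell_check("en", "Hello"), spell_check("tlh", "Qapla'"))  # True True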

In essence, Strategy suggests you use an object to define behaviour. The object performs an algorithm on the context passed to it, meaning a strategy operates on a context, not on the object holding the strategy.

Beware of performance problems caused by last-minute decision-making as the more objects you have behaving like a Strategy, the less predictable your program and the less work can be done ahead of time.

“You’ve told me what to do, but I’ll do it my way, thanks.”

1

I believe this joke word originated from XKCD https://xkcd.com/739, but the point is that there are rules for how neologisms come about, and why not spell-check them as if they already existed?

Template Method

If you have enough Strategy objects, you might see repetition in their operational details. Some elements could repeat across multiple strategies, and some structures seem like boilerplate. I shall give examples of the issues through a Restaurant Meal class.

Many meals can have the same dessert, starter, or even main:

  • Salad_Steak_IceCream
  • Salad_ChickenPie_IceCream
  • PrawnCocktail_Risotto_Brownie
  • Toast_ChickenPie_Brownie
  • Toast_Steak_CheeseCake

Repeating elements in the requirements means repeating elements in the objects which are not satisfying to implement. However, the parts of the meals are not the only repeating thing.

Every strategy will empty the table before serving the next course. But clearing the table is not part of the specifics of any meal. Also, every meal has three courses. This is a general structure shared by every strategy, and it looks like boilerplate code in every meal.

A dutiful programmer would then think to reduce the repetition. It’s error-prone, after all. But how can you remove repetition when it’s the when-and-with-what, not the how?

Introduce a Template Method to provide hooks on which to hang the details. By finding the expected pattern of operations, you can make some steps virtual and override them.

Template Example

Because the GoF book is about object-oriented designs, it does not mention how to construct a Template Method object without inheritance. You may use an array or dictionary of callbacks. In this case, rather than sub-classing the AbstractTemplateClass to implement the details of the steps, you can use Strategy or Command objects to define the specific behaviour. Use a Prototype to retain the dynamism while increasing type-safety.

# The course callables (PrawnCocktail, Risotto, CheeseCake) and the table
# actions (PlaceCutlery, Cleanup) are assumed to be functions defined elsewhere.
meal_dict = {
    "Starter": PrawnCocktail,
    "Main": Risotto,
    "Dessert": CheeseCake,
    }

def MealTime(meal_dict):
    # The fixed template: lay the table, serve each course, and clear up after it.
    PlaceCutlery()
    meal_dict["Starter"]()
    Cleanup()
    meal_dict["Main"]()
    Cleanup()
    meal_dict["Dessert"]()
    Cleanup()

“We’re going to cook a roast. So, first, pre-heat whatever you’re going to cook it in. Second, season whatever meat you’re cooking …”

Abstract Factory

AKA: Kit

When you find you need to construct different sets of objects based on the same algorithm, data, or selection of actions, you want to allow for variability in what you build. Using a Factory Method makes sense for a single object type, but when there are more objects, and they are all related, we need a way to connect the Factory Methods together so they remain coupled correctly.

An Abstract Factory is a base class defining related factory methods. The Abstract Factory can be passed as an object at runtime to allow other methods to control when and what specific classes are instanced.

Abstract Factory

It can be helpful in Builders and Template Methods or other nominalisation of action patterns. A regular find outside of C++ would be a dictionary of functions, lambdas, or prototypes. It depends on the idiom of object creation in the language and project. The keys usually follow a pre-defined naming convention. Abstract Factory can be considered a menu for ordering construction, or objects à la carte if you will.

As we only use the Abstract Factory to relate Factory Methods, it can be stateless and can be safely implemented as a Singleton. The only state for an Abstract Factory would be cross-cutting concerns such as logging or performance statistics.

Each concrete instance of an object-oriented Abstract Factory overrides the factory methods to implement construction.

Making the abstract factory extensible removes the ability to specify how the created components must relate. For example, using a string, enum, or unique-ID allows adding new constructibles at runtime but doesn’t limit what can be added and removes some forms of type safety.

The base Abstract Factory class need not be abstract. It can have concrete methods as defaults and even be a default abstract factory object, with other concrete Abstract Factories overriding as necessary.
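
A small sketch, with hypothetical furniture styles, of how the related factory methods travel together so a client never mixes families by accident:

# Each concrete factory builds a matching family of parts. In Python, duck
# typing lets us skip an explicit abstract base class.
class ModernChair: pass
class ModernTable: pass
class RusticChair: pass
class RusticTable: pass

class ModernFurnitureFactory:
    def make_chair(self): return ModernChair()
    def make_table(self): return ModernTable()

class RusticFurnitureFactory:
    def make_chair(self): return RusticChair()
    def make_table(self): return RusticTable()

def furnish_room(factory):
    # the caller decides when and how many; the factory decides which style
    return [factory.make_table(), factory.make_chair(), factory.make_chair()]

print([type(item).__name__ for item in furnish_room(RusticFurnitureFactory())])
# ['RusticTable', 'RusticChair', 'RusticChair']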

“I build furniture of a certain style. Tell me what you want me to build. Chair, table, stool?”

Composite

You have objects consisting of many element objects. These container objects can themselves be contained as elements within other containers. This hierarchy defines collections which should behave as elements. Tools in your program presently have to check whether an object is a container or one of the value or leaf objects before acting upon them.

def recurse_apply(node, operation):
    # Node is the container type; anything else is treated as a leaf value.
    if isinstance(node, Node):
        for child in node.children:
            recurse_apply(child, operation)
    else:
        operation(node)

Use the Composite pattern and define a generic interface providing the necessary operations for tools to complete their tasks with any object in the hierarchy. For example, a document is an element of type composite made of a page with many sub-components. There might be an image object, a text block object, a line or rectangle object, or a filled shape. All of these objects are leaf objects. When printing, they need to provide instructions to a printer, so offer a default empty Render method on a root class (called component) for all elements and override rendering for each relevant concrete object.

Composite
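
A small sketch of the document example, with hypothetical element classes, shows how the isinstance checks from earlier disappear behind a shared interface:

# Every element exposes render(), so tools no longer need to ask whether
# they are holding a leaf or a container.
class Component:
    def render(self):
        pass  # default: nothing to draw

class Text(Component):
    def __init__(self, content): self.content = content
    def render(self): print(f"text: {self.content}")

class Image(Component):
    def __init__(self, path): self.path = path
    def render(self): print(f"image: {self.path}")

class Page(Component):
    def __init__(self, children): self.children = children
    def render(self):
        for child in self.children:
            child.render()  # composites simply delegate to their children

document = Page([Text("Title"), Page([Image("cover.png"), Text("Chapter 1")])])
document.render()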

In computer game engines of the past, it was common to use the Composite pattern for a scene graph. Every object in the scene graph would be a component with a Render method and an Update method. In many engines, every node in the scene graph would potentially have children, and so every node was a composite whether or not it was a leaf. Every frame, the render and update calls would cascade down from the root1 doing what they needed to do.

Scene graphs have become something of an anti-pattern in game development now, with the hierarchy of objects in a scene graph seen as a performance problem. As the API calls run across a collection of unrelated instances, it nearly guarantees the worst possible cache utilisation. In addition, loading and saving out these hierarchies is non-trivial. Rendering more than once per frame, which can apply to split-screen multi-player games and VR, can become complicated. Adding and removing objects from such a hierarchy often breaks optimisations. Introducing bullets, coins, smoke, and other momentary effects is expensive because they make structural adjustments in almost every frame. If you are culling your render using the hierarchy, it can cause all sorts of complications there as well.

GoF suggests:

  • Components should maintain pointers to their parents, meaning the structure is built into the type.
  • Use a GetComposite call to avoid dynamic casting for composite-only methods.

I still have one more issue with the composite pattern. Even without the issues mentioned in the games-scene-graph section, there’s another problem. Maintaining membership in multiple hierarchies or graphs at the same time is tricky. Because parents and children are part of the objects as an invasive element, the hierarchy is something the composite is. You may want this for simplicity’s sake, but when you need representation in multiple structures, it becomes a huge chore to strip it out, or you end up handling it as additional complexity by having one intrusive hierarchy and further hierarchies as externalised structures.

For one example, consider the hierarchy of objects in a scene. Their relationships define where they are. We typically position a character in a vehicle relative to the vehicle. Forget about the human perception of the objects in the hierarchy for a moment. There’s also the hierarchy of meshes and materials. For optimal rendering, it used to be the case that you would batch up renders so expensive context switches were fewer and the more lightweight ones were allowed to happen more frequently. A good rendering hierarchy would be one where the most deleterious procedures were in the first layer, the next most time-consuming in the following layer, and so on. We would collect them into alternating clumps. We had to create an entirely different order from a typical hierarchy traversal, whether depth or breadth. We needed a second hierarchy, but usually, we just kept a list.

“I am one; I may be many, but I act as one all the same.”

1

Or roots if you separated out and had different scene graphs for UI or other environments to be rendered in a different way.

Adapter

AKA: Wrapper

When an object or class has an API that cannot or should not be changed, but you need it to have some significant change to adapt to a different use case, then you need an Adapter. You might have two libraries that need access to the same instance but don’t share a common interface. It might be a legacy object which needs a new feature to support a change while keeping old code untouched.

There are two types of Adapter. The first is a class adapter; they wrap the class so you can create new objects pre-adapted. The new objects work as drop-in replacements for the old object type. The second type is an object adapter; it acts as an intermediary, holding the adaptee and forwarding calls through to it while reinterpreting the API where necessary.
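
A sketch of the object-adapter flavour, with a hypothetical legacy logger and the interface the new code expects:

# The legacy class cannot change, but new code wants write(message, level).
# The adapter holds the adaptee and translates calls through to it.
class LegacyLogger:
    def log(self, text):
        print(f"LOG: {text}")

class LoggerAdapter:
    def __init__(self, adaptee):
        self.adaptee = adaptee

    def write(self, message, level):
        self.adaptee.log(f"[{level}] {message}")  # reinterpret the API

logger = LoggerAdapter(LegacyLogger())
logger.write("disk nearly full", level="WARN")  # LOG: [WARN] disk nearly full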

“Let me translate for you.”

Bridge

AKA: Handle/Body

In the presence of an abstraction of a set of classes and operations, such as when using an Abstract Factory to construct all the parts of your document, you can find many concrete versions of objects that share common features. Some of them do the same thing but with different concrete calls.

Perhaps your Abstract Factory allows you to choose the type of persistence layer you use. You select from simple file storage with JSON or a more complicated SQL-backed solution. If the objects produced by the factory are document elements, then each element may have a similar sequence of actions regardless of which concrete type they belong to.

The JSON-backed image object will:

  • Open the file store, reading the JSON as a queriable object,
  • Query it for the relevant data for the binary file path,
  • Then, pass the file path to an image loader.

The SQL-backed image object will:

  • Open a connection to the database,
  • Query it for the relevant data for the binary data location,
  • Then, pass the image data from a blob query into an image loader.

If we have a text section object, it will have some similar steps:

  • Open a connection to the database,
  • Query it for the relevant data for the text field,
  • Then, pass the formatting and content objects to the text builder.

Bridgeless

There can be many more abstractions and other possible storage solutions in a document. This becomes an N² problem.

Instead of having the concrete objects perform these steps, give the implementation details their own abstraction, separate from the abstraction of the object using them.

The general abstract image object will:

  • Open a query object via the implementation abstraction interface
  • Query that object for the relevant data for the image loader call,
  • Then, pass the result to an image loader.

Bridged

So, we split the document element abstraction from parts of its implementation. We can use inheritance to refine new features and add new possibilities.

As another example, an ObjectInWorld can DrawSprite, but Character inheriting from ObjectInWorld will use DrawSprite many times to build up a representation. ObjectInWorld implements DrawSprite as m_render->DrawSprite, and m_render is an ObjectInWorldImp type.

Bridge allows platform- or context-specific implementations of base class functions to be replaced via a variable referring to an implementation object. The AbstractBase can extend its features, still calling into the primitive Implementation calls, allowing implementations to catch up when it’s worth it.
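
A minimal sketch using the ObjectInWorld names from the paragraph above; the TextRenderer implementation and the exact shape of the interfaces are my own assumptions:

```cpp
#include <iostream>
#include <memory>
#include <string>

// Implementation side of the bridge: the primitive rendering calls.
class ObjectInWorldImp {
public:
    virtual ~ObjectInWorldImp() = default;
    virtual void DrawSprite(const std::string& sprite, int x, int y) = 0;
};

// One concrete implementation; others (GL, console, a test double) can be swapped in.
class TextRenderer : public ObjectInWorldImp {
public:
    void DrawSprite(const std::string& sprite, int x, int y) override {
        std::cout << "draw " << sprite << " at " << x << "," << y << '\n';
    }
};

// Abstraction side of the bridge.
class ObjectInWorld {
public:
    explicit ObjectInWorld(std::shared_ptr<ObjectInWorldImp> render) : m_render(std::move(render)) {}
    virtual ~ObjectInWorld() = default;
    void DrawSprite(const std::string& sprite, int x, int y) { m_render->DrawSprite(sprite, x, y); }
protected:
    std::shared_ptr<ObjectInWorldImp> m_render;
};

// The abstraction can grow features without touching the implementations.
class Character : public ObjectInWorld {
public:
    using ObjectInWorld::ObjectInWorld;
    void Draw(int x, int y) {
        DrawSprite("body", x, y);      // built up from several primitive calls
        DrawSprite("head", x, y + 1);
        DrawSprite("hat",  x, y + 2);
    }
};

int main() {
    Character hero(std::make_shared<TextRenderer>());
    hero.Draw(3, 0);
}
```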

“We’re a designer-crafter team. The crafter is a carpenter; they work the wood. I’m the designer. I don’t work the wood, but I know how to direct the woodworker to create fabulous things.”

Decorator

AKA: Wrapper

You want to add a behaviour change to an object without changing the API or sub-classing. Perhaps it logs progress, but you want it also to ping you via email if the task has taken longer than ten minutes and looks like it’s less than 50% of the way through. But you only want to include this feature when needed. At this point, you want something suitably dynamic but not invasive to the existing class or object.

A Decorator adds behaviour before or after method calls by being of the same form as an object adapter, but instead of translating the API, it duplicates it. It hooks into the call and injects procedures where necessary to achieve the additional behaviour.

Decorator provides an inheritance-free form of overriding or introducing behaviour. If you can swap out the object, this allows for runtime mutability: the ability to add extra functions to an interface dynamically. Unlike class adapters, this also allows for “double” overriding. It’s like an inversion of a callback in that it pre-empts the original behaviour, and it can inject work before or after the call, any number of times.
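
As a minimal sketch, with an invented Task interface and a timing decorator standing in for the email-after-ten-minutes example:

```cpp
#include <chrono>
#include <iostream>
#include <memory>

// The interface shared by the real object and its decorators.
class Task {
public:
    virtual ~Task() = default;
    virtual void run() = 0;
};

class RealTask : public Task {
public:
    void run() override { std::cout << "doing the work\n"; }
};

// Decorator: duplicates the API and injects behaviour around the call.
class TimingDecorator : public Task {
public:
    explicit TimingDecorator(std::unique_ptr<Task> inner) : inner_(std::move(inner)) {}
    void run() override {
        const auto start = std::chrono::steady_clock::now();
        inner_->run();  // forward to the wrapped object
        const auto elapsed = std::chrono::steady_clock::now() - start;
        std::cout << "took "
                  << std::chrono::duration_cast<std::chrono::microseconds>(elapsed).count()
                  << "us\n";
    }
private:
    std::unique_ptr<Task> inner_;
};

int main() {
    // Decorators can be stacked because they share the Task interface.
    std::unique_ptr<Task> task = std::make_unique<TimingDecorator>(std::make_unique<RealTask>());
    task->run();
}
```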

“I will unlock the doors before your work begins and lock them back up when you are done.”

Façade

When a subsystem consisting of a set of objects is more complicated than strictly necessary to get the job done, it can be cumbersome to use effectively. But removing unnecessary features might be impossible.

Under such circumstances, consider using a Façade object that implements only the interface’s essential elements. Limiting what is possible has a second positive effect: the hidden interface is no longer entirely public and can likely diverge and deprecate without as much impact.

The only difference between Façade and Adapter is that the adaptation is for the user, not the code. It is to provide a more straightforward API or an API for a low-knowledge use case. It’s the objectification of domain-expert knowledge. It can also be seen as a pattern-like version of interface segregation.

I’ve seen this pattern played out and referred to as helper-object or high-level-interface.
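
A minimal sketch, with invented subsystem classes standing in for a more complicated persistence layer:

```cpp
#include <iostream>
#include <string>

// A sprawling subsystem (hypothetical names) that is awkward to drive directly.
class ConnectionPool { public: int open(const std::string& dsn) { std::cout << "open " << dsn << '\n'; return 1; } };
class QueryPlanner  { public: std::string plan(const std::string& sql) { return "planned:" + sql; } };
class Executor      { public: void execute(int conn, const std::string& plan) { std::cout << "run " << plan << " on " << conn << '\n'; } };

// Façade: only the essential interface for the common, low-knowledge use case.
class Database {
public:
    explicit Database(const std::string& dsn) : conn_(pool_.open(dsn)) {}
    void query(const std::string& sql) { exec_.execute(conn_, planner_.plan(sql)); }
private:
    ConnectionPool pool_;
    QueryPlanner planner_;
    Executor exec_;
    int conn_;
};

int main() {
    Database db("file://local");
    db.query("SELECT 1");  // expert knowledge of the subsystem stays hidden behind one call
}
```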

“Okay, calm down. Let me walk you through this.”

Flyweight

When you have Composites, you often have collections of similar objects. Perhaps they each have some unique state, such as their position on a page or their textual content. But sometimes, the thing that makes them unique is the specific selection of different regular elements. They may have a distinctive combination of font and colour, or perhaps the image and location are unusual. If each object is unique only in these ways, we have duplicated common information and wasted resources.

Instead of capturing the non-unique data per object, present a palette of options for Clients to call upon as necessary. The Flyweight pattern calls these options ConcreteFlyweight objects, which makes little sense to me as they are the heavier objects we try to avoid duplicating.

In graphical rendering, triangle mesh objects or sprites count as ConcreteFlyweight objects because you refer to them rather than holding a full mesh or sprite in each Client. In some documents, fonts, images such as logos, and even repeated composites can be shared in this manner—house planning software often uses a palette of doors, windows, and furniture items.

One way of thinking about the Flyweight is that instead of handing off how to do a thing to another object, as you do with Strategy, you hand off being a thing instead. It is the nominalisation of attributes, but not of identity. So, when using the ConcreteFlyweight, invocations will frequently require a context so that the ConcreteFlyweight can perform as if it were the fully-fledged instance while acting on behalf of the Client.

The pattern is only practical when:

  • You have a collection of objects
  • Those objects have internal state
  • Those objects have costly attributes in common

Without many objects, there is little to be gained from sharing attributes. Without the state, there is no call for unique Client objects. If they aren’t unique, you could reference the ConcreteFlyweight directly.

Client objects are stateful, while the ConcreteFlyweight objects are stateless. You might also think of them as Prototypes you never clone.

If your composite does not require parents, you can use a ConcreteFlyweight as a leaf in your hierarchy, reducing costs.
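
A minimal sketch: the Font stands in for the heavy shared data, the FontPalette is the palette handing out shared options, and each Label keeps only its unique state. All of the names here are my own:

```cpp
#include <iostream>
#include <map>
#include <memory>
#include <string>

// ConcreteFlyweight: the heavy, shared part (e.g. a loaded font); it holds no per-Client state.
class Font {
public:
    explicit Font(std::string name) : name_(std::move(name)) {}
    // Extrinsic state (the text and position) is passed in as context.
    void draw(const std::string& text, int x, int y) const {
        std::cout << text << " in " << name_ << " at " << x << "," << y << '\n';
    }
private:
    std::string name_;  // stands in for expensive glyph data
};

// The palette of options Clients call upon as necessary.
class FontPalette {
public:
    std::shared_ptr<const Font> get(const std::string& name) {
        auto& slot = fonts_[name];
        if (!slot) slot = std::make_shared<Font>(name);  // create once, share thereafter
        return slot;
    }
private:
    std::map<std::string, std::shared_ptr<const Font>> fonts_;
};

// Client: stateful, but refers to the shared flyweight rather than owning a copy.
struct Label {
    std::string text;
    int x, y;
    std::shared_ptr<const Font> font;
    void draw() const { font->draw(text, x, y); }
};

int main() {
    FontPalette palette;
    Label a{"Title", 0, 0, palette.get("Serif")};
    Label b{"Body", 0, 2, palette.get("Serif")};  // the same Font instance is shared by both labels
    a.draw();
    b.draw();
}
```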

“I have a house. Although, yes, I share it with quite a few others.”

Proxy

AKA: Surrogate

When object construction requires more resources than you can afford, systems slow down and become unresponsive. Systems avoid this using tactics such as lazy evaluation or loading-on-demand. But this means objects then have two jobs. They must be the thing they need to be but also the not-yet-realised versions of themselves.

Other times, an object might have methods requiring the acquisition of resources, but in hindsight, those specific methods are not necessary to achieve your goal. For example, if you want to know the size of an image, you don’t need to load the whole file, only the header. However, adding the capacity to partially load a file is a new feature not everyone wants.

Or perhaps what you need from the API is a caching mechanism. As you can see, the Proxy pattern handles quite a few problems.

A Proxy is an object which takes some responsibility away from the concrete object, the RealSubject. It acts as a stand-in, like an Adapter, but adjusts behaviour like a Decorator. Instead of modifying the API or adding features, it intercepts calls directed at the RealSubject and manages the workload and lifetime of the RealSubject.

Proxy

A Proxy can respond faster or more cheaply than the complete object it represents. You can tune it to the client’s needs rather than write for completeness or correctness. Examples include image placeholders, remote service proxies, pre-validating before beginning a transaction, quick estimates or rough versions of the expensive transformations. In image processing, for example, a Proxy could provide an instantaneous blocky estimate of a radial blur or a preview of resource-intense filters.

The GoF book mentions four types of proxy:

  • Remote proxy: a stand-in for some object in another address space, potentially even on a different host.
  • Virtual proxy: e.g. the lazy loading image object.
  • Protection proxy: adding access control to an object.
  • Smart reference: a proxy acting like a smart pointer, potentially tracking references to the resource, loading it on demand, and handling locking for mutable state.

A Proxy is one of the best uses of the object-oriented paradigm. Swapping out a whole set of behaviours for another is possible in non-OO languages, but the simultaneous change of behaviour and data representation is smoother in OO. A virtual proxy removes the decision-making process for resource handling, decoupling the concern of when and how to handle the required system calls. In most other paradigms, this involves local adjustment to support the feature.
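
A minimal sketch of the virtual proxy, following the image example above; the class names and the header-width shortcut are my own assumptions:

```cpp
#include <iostream>
#include <memory>
#include <string>

class Image {
public:
    virtual ~Image() = default;
    virtual int width() const = 0;
    virtual void draw() = 0;
};

// RealSubject: expensive to construct (pretend it decodes the whole file).
class RealImage : public Image {
public:
    explicit RealImage(std::string path) : path_(std::move(path)) { std::cout << "decoding " << path_ << '\n'; }
    int width() const override { return 640; }
    void draw() override { std::cout << "drawing " << path_ << '\n'; }
private:
    std::string path_;
};

// Virtual proxy: answers cheap questions itself, builds the RealSubject only on demand.
class ImageProxy : public Image {
public:
    ImageProxy(std::string path, int header_width) : path_(std::move(path)), header_width_(header_width) {}
    int width() const override { return header_width_; }  // answered from the header alone
    void draw() override {
        if (!real_) real_ = std::make_unique<RealImage>(path_);  // lazy construction
        real_->draw();
    }
private:
    std::string path_;
    int header_width_;
    std::unique_ptr<RealImage> real_;
};

int main() {
    ImageProxy image("photo.png", 640);
    std::cout << image.width() << '\n';  // no decode needed yet
    image.draw();                        // now the RealSubject gets built
}
```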

“The RealSubject is not available. How can I help you?”

Chain of Responsibility

You have a hierarchy of components, and they respond to events. One event comes in, but the element which receives it doesn’t know what to do with it. It can’t handle the event. In this instance, you push the event up to the parent and hope they can respond to the message.

At this point, you have created a Chain of Responsibility. A class overrides the handling of the message if it can, and the default implementation is to propagate further along the chain, usually upwards.

But what if an object along the path doesn’t handle the event and drops it without handing it to the next parent? I prefer the certainty of a method that walks the chain for me and demands that every visited object return a value stating whether it handled the message or not.

The pattern doesn’t seem OO specific, as interrupt handlers and chains for messages or events have been around for a long time in procedural code. The pattern is an example of how it could be implemented in OO code. For someone new to OO, this could be reason enough for its inclusion.
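
A minimal sketch of the walked-chain variant I prefer, where every handler must report whether it dealt with the event; the handler names are invented:

```cpp
#include <iostream>
#include <string>
#include <vector>

// Each handler reports whether it dealt with the event.
class Handler {
public:
    virtual ~Handler() = default;
    virtual bool handle(const std::string& event) = 0;
};

class SaveHandler : public Handler {
public:
    bool handle(const std::string& event) override {
        if (event != "save") return false;
        std::cout << "document saved\n";
        return true;
    }
};

class QuitHandler : public Handler {
public:
    bool handle(const std::string& event) override {
        if (event != "quit") return false;
        std::cout << "shutting down\n";
        return true;
    }
};

// The chain is walked for the caller, so an event cannot be silently dropped mid-way.
bool dispatch(const std::vector<Handler*>& chain, const std::string& event) {
    for (Handler* handler : chain)
        if (handler->handle(event)) return true;
    std::cout << "unhandled event: " << event << '\n';
    return false;
}

int main() {
    SaveHandler save;
    QuitHandler quit;
    std::vector<Handler*> chain{&save, &quit};
    dispatch(chain, "save");
    dispatch(chain, "print");  // falls off the end, but at least we know it did
}
```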

“Sorry, nothing to do with me. I’ll go ask my manager.”

Command

AKA: Action, Transaction

When you know there’s an action or sequence of actions to take, but not what they are or even which objects they operate on, you need some way to store the target object of the activity, the method to call, and the parameters to apply. One way to represent this information is with a command object. Consider a command line interface: it includes the operation, its target or targets, and the necessary parameters. In effect, a shell script is a sequence of actions to take. But how do we represent that inside our application?

We can store references to our recipient objects and function pointers to the methods to call, as long as they all have the same signature; for example, you could require they all take dictionaries of parameters. This can work in some systems for a while, as long as the types match. But at some point, the recipient’s base class or the method’s parameters will deviate, and then you won’t know how to store or call the function, or how to reference the recipient safely.

Instead of attempting to store the parts of the puzzle, introduce a Command object which abstracts the execution and allows the concrete command objects to handle how they refer to their recipient, how they call the required method, and how they hold the parameters to use.

Command

Adding undo can be achieved by adding an Unexecute method and reversing the operation. The GoF book suggests using a Memento for this purpose. However, sometimes things cannot be put back. Bills cannot be unpaid, and secret data cannot be undeleted. But even if some things cannot be undone, their impact can be somehow recognised and fixed.

There are some pitfalls with Command objects in that they do not always capture the sequence of events in a usable way if not designed very carefully. For example, if used in a do/undo system, commands that create objects can cause complications with commands that affect those objects. When creating, modifying, undoing twice, and then redoing twice, there’s sometimes a problem with the second redo—reapplying an action on a recreated object. The command to redo must operate on a recipient potentially destroyed as part of undoing the construction command. This non-trivial problem affects anything that creates mutable instances.

The fact these problems weren’t talked about in the GoF book makes me question whether the undo systems were implemented fully, as those bugs appear very quickly. Or perhaps construction commands never came up in their implementations.

My wisdom on this subject is to refer to objects by handles using unique IDs and never pointers. Creation commands will re-create a different object but can always return it to the document with the same unique ID.

This lack of prescience is a diagnostic point for me. We can condemn patterns when their descriptions lack any representation of common bugs. If most implementations of a pattern produce a set of regularly encountered sticking points, and those issues are not covered in the pattern prose, then the pattern is incomplete and needs updating. Or perhaps it was a hopeful pattern extrapolated from the mind rather than from direct experience.
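
As a minimal sketch of the handle-based approach suggested above, with invented CreateShape and RenameShape commands and a map standing in for the document:

```cpp
#include <iostream>
#include <map>
#include <string>
#include <utility>

// A document keyed by stable handles, so commands never hold raw pointers to recipients.
using Handle = int;
struct Document { std::map<Handle, std::string> shapes; };

class Command {
public:
    virtual ~Command() = default;
    virtual void Execute(Document& doc) = 0;
    virtual void Unexecute(Document& doc) = 0;
};

// A creation command re-creates a different object on redo, but always under the same handle.
class CreateShape : public Command {
public:
    CreateShape(Handle id, std::string kind) : id_(id), kind_(std::move(kind)) {}
    void Execute(Document& doc) override { doc.shapes[id_] = kind_; }
    void Unexecute(Document& doc) override { doc.shapes.erase(id_); }
private:
    Handle id_;
    std::string kind_;
};

class RenameShape : public Command {
public:
    RenameShape(Handle id, std::string name) : id_(id), name_(std::move(name)) {}
    void Execute(Document& doc) override { std::swap(doc.shapes.at(id_), name_); }
    void Unexecute(Document& doc) override { std::swap(doc.shapes.at(id_), name_); }  // swap back
private:
    Handle id_;
    std::string name_;
};

int main() {
    Document doc;
    CreateShape create(1, "circle");
    RenameShape rename(1, "wheel");
    create.Execute(doc);
    rename.Execute(doc);
    rename.Unexecute(doc);
    create.Unexecute(doc);
    create.Execute(doc);  // redo: a different object, but the same handle 1
    rename.Execute(doc);  // so the later command still finds its recipient
    std::cout << doc.shapes.at(1) << '\n';  // prints "wheel"
}
```

Because the rename command looks its recipient up by handle, the second redo still works after the shape has been destroyed and re-created.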

“I will do it. What? To whom? That’s my job to know.”

Memento

AKA: Token

So, you have a command, but what if that command is a narrowing function, like a modulo where multiple inputs map to the same output? You want to execute it but are concerned it might not be the right action. In effect, you wish to be reassured you can restore the object’s state at will, reverting any changes you have made.

You may have tried to serialise the object’s state but found that serialisation caused unwanted overhead. You may have wanted to do the reverse of the action taken but found there was no way to reverse it, as exemplified by the modulo.

If this is the case, you need a way to store and restore the internal state. The GoF book proposed the Memento pattern as a solution.

Before committing the action, request a Memento from the object, and only then make the call or adjustment you wish to make. Usually, it’s created and stored by a Command so it can return the object to its original state when you call the Unexecute method. A Command can pass the Memento back to the object, restoring its state.

Unfortunately, Memento was identified as something constructed from the object of the action, not from the action itself. Therefore, it loses the capacity to store only the minimal data necessary to return the object to its prior state. The GoF book mentions keeping an incremental state, but this seems post hoc, including changes brought about by previous actions, but not the state changing due to the action about to be taken. To Unexecute a Command, an object restoring from Memento must look at not just the current Memento but all the possible restoration points in history. This is possible, but it couples things from the wrong side of the event.
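
A minimal sketch of the basic mechanics, with an invented Counter originator and a modulo standing in for the narrowing action:

```cpp
#include <iostream>

// Originator: hands out opaque snapshots of its own state.
class Counter {
public:
    class Memento {
        friend class Counter;
        explicit Memento(int value) : value_(value) {}
        int value_;
    };

    Memento save() const { return Memento(value_); }
    void restore(const Memento& m) { value_ = m.value_; }

    void wrapAround(int modulus) { value_ %= modulus; }  // narrowing: cannot be reversed by calculation
    int value() const { return value_; }

private:
    int value_ = 42;
};

int main() {
    Counter counter;
    Counter::Memento before = counter.save();  // taken before committing the action
    counter.wrapAround(10);
    std::cout << counter.value() << '\n';      // 2
    counter.restore(before);                   // the irreversible modulo is undone by restoring state
    std::cout << counter.value() << '\n';      // 42
}
```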

“You want I should be like I was before?”

Interpreter

When you need to convert a data source from one form into another, you need to interpret it. It can be as simple as turning a binary data sequence into a list of text lines by looking for new-line characters.

As the complexity of the analysis function grows and the number of ways it can be configured increases, so does the likelihood that you have accidentally created a report-processing or query language.

Translation of such a data stream can start with a Stream Builder, but when the transformation needs to change, you will need a different builder. One is fine, and two are bearable, but you need a better tactic when they grow to three or more.

Instead of putting the whole filtering and action sequence into one method of one class, introduce the Interpreter pattern by making each transform a separate object in a Composite.

The GoF book describes the Interpreter pattern as defining a language, representing problems in the language, and then resolving them by interpreting those forms. It even starts with an example using Backus-Naur form to describe the classes in the system.

The wording is complicated, and the language does not match our current way of thinking. It’s possibly more correct nowadays to say we represent intentions in the language rather than problems.

I want to avoid using terms like language or grammar as they confuse the pattern explanation since most software engineers don’t knowingly write DSLs any more1. Instead, I want to refer to objects converting, querying, or analysing other objects.

The first example in the GoF book uses a Composite, which we must traverse to evaluate and construct an answer to a query. We can think of this first type of Interpreter as using objects in a structure to define an algorithm. Rather than have a fixed algorithm, we write its elements, such as filtering or recognising elements in a stream of input. The structure of objects interprets another object or stream of data. I like to call this form interpreter-as-state, as it interprets, but how it interprets is determined by the state of the composite. An interpreter built from objects like this was the type used in the example in Chapter 10.

The unique thing about interpreter-as-state is that the pattern uses structured objects to transform, query, process or interpret something else. The structured objects remain constant and unaffected during the evaluation. The context often remains unchanged, and the result is a new object.

The second example in the GoF book shows something different and perhaps easier to comprehend. The second Interpreter pattern is one of an Interpreter object making changes to a structure. The Interpreter object traverses the composite and produces a result. It’s used to rewrite a boolean expression structure. First, by replacing variables with constants, then replacing variables with expressions. In short, this version of the Interpreter is a pattern representation of updating immutable data structures. Or, to put it another way, and the way I prefer to refer to it, it’s a monad.

The important thing about the monad is that this pattern adjusts the structured objects based on the context and the Interpreter object which walks it. The structure is the thing that changed. The Interpreter either modifies the structure objects or returns a new structure using similar types. You could also think of this version of the pattern as a reinterpreter.

I have yet to discover examples where both change at the same time. Either the structure interprets the context or is interpreted in light of the context. This dichotomy seems relevant, so I distinguish between the two patterns, as each appears to solve a different problem, even if the solutions look similar on the surface. It’s another pattern suffering from inappropriate aggregation due to the solution-oriented approach.
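
A minimal sketch of both forms, loosely based on the boolean-expression idea; the class names and the substitute method are my own. Calling evaluate is the interpreter-as-state form: the structure stays constant and interprets a context. Calling substitute is the rewriting form: a new, similar structure is returned.

```cpp
#include <iostream>
#include <map>
#include <memory>
#include <string>

// A tiny boolean-expression composite.
struct Expr {
    virtual ~Expr() = default;
    // Interpreter-as-state: the structure evaluates a context and is left unchanged.
    virtual bool evaluate(const std::map<std::string, bool>& context) const = 0;
    // Rewriting form: return a new structure with one variable replaced by a constant.
    virtual std::shared_ptr<Expr> substitute(const std::string& name, bool value) const = 0;
};

struct Constant : Expr {
    explicit Constant(bool v) : value(v) {}
    bool evaluate(const std::map<std::string, bool>&) const override { return value; }
    std::shared_ptr<Expr> substitute(const std::string&, bool) const override {
        return std::make_shared<Constant>(value);
    }
    bool value;
};

struct Variable : Expr {
    explicit Variable(std::string n) : name(std::move(n)) {}
    bool evaluate(const std::map<std::string, bool>& context) const override {
        return context.at(name);
    }
    std::shared_ptr<Expr> substitute(const std::string& target, bool value) const override {
        if (name == target) return std::make_shared<Constant>(value);
        return std::make_shared<Variable>(name);
    }
    std::string name;
};

struct And : Expr {
    And(std::shared_ptr<Expr> l, std::shared_ptr<Expr> r) : lhs(std::move(l)), rhs(std::move(r)) {}
    bool evaluate(const std::map<std::string, bool>& context) const override {
        return lhs->evaluate(context) && rhs->evaluate(context);
    }
    std::shared_ptr<Expr> substitute(const std::string& target, bool value) const override {
        return std::make_shared<And>(lhs->substitute(target, value), rhs->substitute(target, value));
    }
    std::shared_ptr<Expr> lhs, rhs;
};

int main() {
    // (x AND y), first evaluated against a context, then rewritten with x fixed to true.
    auto expr = std::make_shared<And>(std::make_shared<Variable>("x"),
                                      std::make_shared<Variable>("y"));
    std::cout << expr->evaluate({{"x", true}, {"y", false}}) << '\n';  // prints 0
    auto rewritten = expr->substitute("x", true);                      // a new, similar structure
    std::cout << rewritten->evaluate({{"y", true}}) << '\n';           // prints 1
}
```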

Both Interpreter styles effectively represent solutions to many more problems than initially claimed. The examples show how to transform compounds of any form and build up small but effective DSLs using objects.

So, instead of thinking about Interpreter as defining a grammar, consider it to be a way to write procedures, queries, or transforms dynamically or how to mutate structured data. It does not need to be a tree to be a useful pattern. It does not need to define a language in a traditional sense. It does not need to interpret anything directly but only consume something and generate some interpretation.

A general misconception, and the third interpretation of the pattern, is that the interpreter pattern is about writing a parser—even though the GoF book explicitly states it is not. The book doesn’t explain how to create the tree in the first place, but a monad Interpreter could be a useful pattern in matching and parsing a small language into a structure that could be an interpreter-as-state Interpreter.

“How do you want me to transform this for you?”

1

Since the advent of object-oriented design and programming, the number of explicit DSLs I’ve seen has reduced dramatically. But because the interaction of structured objects with streams of structured and unstructured data is, in effect, a DSL, the number of actual DSLs has increased. This gap in understanding has possibly held back the Interpreter pattern from more mainstream use.

Visitor

AKA: Walker

As you define more complicated Composites, you may need to walk their structure and operate on certain Composite or Leaf objects with your behaviour based on their type. In general, it’s considered poor etiquette to ask an object what type it is. This feature can be unavailable in some languages, such as C++, where run-time type information is optional.

Rather than look inside the object you want to react to, you ask each object to call you. When you traverse the Composite, each component will call into a specific method of the Visitor object provided to the traversal method. You create a concrete Visitor object which listens by overriding these methods.

Visitor

However, the Visitor does not need to traverse a structure. The vital part of the pattern is the Accept method: the call back from the visited object drives the type-specific callback into the Visitor. You can see an example of traversal-free visiting in C++, where std::visit calls whichever overload of the visitor can bind to the variant’s current alternative.
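
For example, with an invented Describe visitor over a std::variant:

```cpp
#include <iostream>
#include <string>
#include <variant>

// Traversal-free visiting: the overload is chosen by the variant's current type.
struct Describe {
    void operator()(int value) const { std::cout << "an int: " << value << '\n'; }
    void operator()(const std::string& value) const { std::cout << "a string: " << value << '\n'; }
};

int main() {
    std::variant<int, std::string> element = 42;
    std::visit(Describe{}, element);  // calls the int overload
    element = std::string("leaf");
    std::visit(Describe{}, element);  // calls the string overload
}
```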

It would have been better had the GoF book not started with the visitor attached to the process of structure traversal. Many developers were caught out by the structure walking and overlooked the significance of the type-based callback.

“For the sake of propriety, I’ll introduce myself.”

Iterator

AKA: Cursor

Those who have written any C++ may know the iterator idiom from the STL. The Iterator pattern is not quite the same. The motivation is the same, but the implementation is quite different. An Iterator allows the user to control the progress through an aggregate without opening up access to the internals but also allows for multiple concurrent traversals.

To create an Iterator, we ask the aggregate to CreateIterator and store the result to use later. We can call First to reset the Iterator, IsDone to find out whether we’ve reached the end of the container, and CurrentItem and Next to use the item at, and step past, the current position in the container.
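
A minimal sketch of that protocol, with invented Aggregate and VectorIterator types:

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

// A GoF-style Iterator with the First/IsDone/CurrentItem/Next protocol.
template <typename T>
class VectorIterator {
public:
    explicit VectorIterator(const std::vector<T>& items) : items_(items) {}
    void First() { index_ = 0; }
    bool IsDone() const { return index_ >= items_.size(); }
    const T& CurrentItem() const { return items_[index_]; }
    void Next() { ++index_; }
private:
    const std::vector<T>& items_;
    std::size_t index_ = 0;
};

template <typename T>
class Aggregate {
public:
    void add(T item) { items_.push_back(std::move(item)); }
    VectorIterator<T> CreateIterator() const { return VectorIterator<T>(items_); }
private:
    std::vector<T> items_;
};

int main() {
    Aggregate<int> numbers;
    numbers.add(1);
    numbers.add(2);
    numbers.add(3);
    // Multiple independent iterations could run concurrently over the same aggregate.
    for (auto it = numbers.CreateIterator(); !it.IsDone(); it.Next())
        std::cout << it.CurrentItem() << ' ';
    std::cout << '\n';
}
```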

In C++, the iterator is even simpler in structure, as it does not necessarily include a pointer to the container. There is no reset, no done; only current, advance, and the capacity to compare with others of the same type. The Iterator pattern is simple to understand, so it can potentially keep systems more tractable for a maintenance programmer. For C++, there is little reason to use it, as the STL iterator idiom is very well known and provides a more effective solution in almost all cases.

An Iterator is another pattern of nominalisation. In this case, the object represents progress through a specific container. It abstracts the status of a loop, relinquishing control to the user rather than requiring callbacks as the container loops over its elements. It’s another inversion of control, but the other way around to usual.

“Next stop, element 12, end of the line.”

Observer

AKA: Dependents, Publish-Subscribe

When you have some objects reacting to events over time (we’ll call these the Celebrities), other objects may want to react to their activity. When the Celebrities reach a specific state or take a particular action, these Fan objects want to know about it.

Polling is either expensive if done too often or misses events if not done often enough. One solution you might have is an event log containing everything every Celebrity does. Each Fan can filter the log for the events they’re interested in, but filtering takes time.

It would be better if the Celebrity events a Fan was interested in directly triggered an event on the Fan.

Observer

The implementation of the Observer pattern gives the Celebrity a protected Notify method so it can post to its Fans. The simplest implementations send events to all fans on every change. This has two main issues.

The first is the broadness. Take care when creating Publications for changes. Changes can cascade, and small changes may go unnoticed without paying close attention. The default Update implementation does not include information about what changed. You could add it, but it would require introducing parameters to the update and notify methods specifying this, and some languages don’t have the flexibility to add the information efficiently.

The second issue has to do with entity lifetimes, specifically object creation events. If we create a Celebrity, a Fan cannot learn of this through the Observer pattern, as they cannot subscribe to an object that does not yet exist. Instead, there is a need for a broker of some sort to allow subscription to an as-yet non-existent Celebrity by some form of identifier.

Event logs can get around these concerns by keeping their events in a tree. A subscription can be written as a filtering expression over the tree, much as you might specify a folder in a file system. A publisher can then publish their updates to a subset of the tree, limiting detailed messages to only the actively interested Fans.

All this leads to a general pattern of messaging systems, but the GoF book has nothing to say about this. Instead, I would recommend reading up on the many messaging-related patterns in Enterprise Integration Patterns[EIP04].
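
Setting those concerns aside, here is a minimal sketch of the basic Celebrity/Fan wiring with a protected Notify, as described above; Band and PrintingFan are invented names:

```cpp
#include <iostream>
#include <string>
#include <vector>

// Fans override Update to react to a Celebrity's changes.
class Fan {
public:
    virtual ~Fan() = default;
    virtual void Update() = 0;
};

class Celebrity {
public:
    void Attach(Fan* fan) { fans_.push_back(fan); }
protected:
    // Concrete Celebrities call Notify after changing state.
    void Notify() {
        for (Fan* fan : fans_) fan->Update();
    }
private:
    std::vector<Fan*> fans_;
};

class Band : public Celebrity {
public:
    void ReleaseAlbum(std::string title) {
        latest_ = std::move(title);
        Notify();  // every fan hears about every change
    }
    const std::string& Latest() const { return latest_; }
private:
    std::string latest_;
};

class PrintingFan : public Fan {
public:
    explicit PrintingFan(const Band& band) : band_(band) {}
    void Update() override { std::cout << "new release: " << band_.Latest() << '\n'; }
private:
    const Band& band_;
};

int main() {
    Band band;
    PrintingFan fan(band);
    band.Attach(&fan);
    band.ReleaseAlbum("Patterns, Vol. 2");
}
```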

“Subscribe now and never miss any of my content.”

Mediator

AKA: Director

When a set of objects react to each other’s state changes, the interactions can become complex. The objects involved become coupled to their specific use case and their relationships with the other objects. The objects may have had to derive from the regular objects to handle the need to send updates to their colleagues in the larger solution while avoiding cascading recursive calls.

A mediator or director object coordinates behaviour when specialising other objects is impossible or not desired. It reduces the need to write specific update code targeting specific objects, increasing the reusability of the interacting components because they no longer need role-specific customisation.

Where an object would otherwise have had to inform a neighbouring object about its change, a Mediator listens for changes in objects through generic endpoints. It can use Observer to achieve this. It responds to messages by issuing further messages to other objects. In fact, a Mediator can usually be implemented mostly as a switchboard, using multiple subscribers to decouple the implementation. The Mediator is an extraction of the responsibility to inform into an object: a nominalisation of the specific relationships between instances.

“Let me coordinate this whole thing.”

State

AKA: Objects for States

At some point, the logic in an object might become cumbersome, branching on attributes that imply a state. As those implicit states become essential to how the program runs across more and more methods, the chance for mistakes increases. Instead of using these implicit states, we migrate to an enum. But then we have introduced a switch into our code, and switches are often a sign of a non-extendable design.

Instead, we want to continue growing the program to completion while staying aware of the risks of these state variables. It would be better to encapsulate the state in some way that does not need a switch but can still transition as before.

To do this, our Stateful object changes its implicit class by holding onto an instance with the desired behaviour, a State object. The State object has a collection of methods as per the Adapter pattern, but each method also receives a context, the Stateful object.

The State object is itself stateless. It can be a Singleton. When a State object detects the need to change the state of the Stateful outer object, it does so via the context parameter.

I avoid the presented implementation of this pattern as I find it more complicated than helpful. Instead, I prefer when a State object returns a new state object (or itself) on every call. The returning variant allows the State object to have some state of its own. It can hand over any information to its replacement State and doesn’t need to know anything about its owner object. This way, you can have more than one state per object.
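
A minimal sketch of the returning variant I prefer, with an invented traffic-light example:

```cpp
#include <iostream>
#include <memory>

// Every call returns the state object to use next.
class TrafficLightState {
public:
    virtual ~TrafficLightState() = default;
    virtual std::unique_ptr<TrafficLightState> tick() = 0;
    virtual const char* name() const = 0;
};

class Red : public TrafficLightState {
public:
    std::unique_ptr<TrafficLightState> tick() override;
    const char* name() const override { return "red"; }
};

class Green : public TrafficLightState {
public:
    std::unique_ptr<TrafficLightState> tick() override { return std::make_unique<Red>(); }
    const char* name() const override { return "green"; }
};

std::unique_ptr<TrafficLightState> Red::tick() { return std::make_unique<Green>(); }

// The Stateful object simply swaps in whatever the current state hands back.
class TrafficLight {
public:
    void tick() { state_ = state_->tick(); }
    const char* colour() const { return state_->name(); }
private:
    std::unique_ptr<TrafficLightState> state_ = std::make_unique<Red>();
};

int main() {
    TrafficLight light;
    std::cout << light.colour() << '\n';  // red
    light.tick();
    std::cout << light.colour() << '\n';  // green
    light.tick();
    std::cout << light.colour() << '\n';  // red again
}
```

Because each state hands back its successor, the owner never needs a switch, and a state can carry private data forward into its replacement if it wants to.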

“I am, therefore I think.”

Patterns from A Pattern Language

A Pattern Language[APL77] contains 253 patterns. Summarising them would require a small book and bring little value as they are already compact. Instead, this section will summarise only those entries referenced in this book.

105 SOUTH FACING OUTDOORS

People use open space if it is sunny, and do not use it if it isn’t, in all but desert climates.

— Christopher Alexander, A Pattern Language, p. 514.

This pattern assumes a northern hemisphere location, so inversion is required for southern hemisphere constructions.

We want to reserve the best light of the sun for the outdoors of a building plot by giving it the part of the land upon which the sun beats down during the day. We want to make sitting outside, by the building, a positive experience.

When we allocate the outdoor space to the north side of a structure, the shadows might help protect us from the heat but offer nothing more to us. The sun provides warmth in the colder months, light to see, and the opportunity for plants to grow. An outdoor place that spends much of its time in shadow is less inviting.

Notice that even when the land is north of adjacent structures, it’s still more valuable when on the south side of your building. The shadows from neighbours may fall on part of your grounds, but the seating or land which seems most like it belongs to your building is on the land that gets the light. The connection to your building makes it your south-facing outdoors.

112 ENTRANCE TRANSITION

Buildings, and especially houses, with a graceful transition between the street and the inside, are more tranquil than those which open directly off the street.

— Christopher Alexander, A Pattern Language, p. 549.

A space separated from another may have a different level of intimacy or another special meaning. In cases where the moods are sufficiently distinct, an intermediate place where a person can be between worlds helps ease the transition. People feel like they shift inside themselves as they enter a new space via an entrance transition. They don’t just shed themselves of their coat, but also the worries of the world outside.

When this pattern is lacking, we find sanctums disturbed by public observation or quiet workspaces made tense by the threat of external interruptions. A house built with a street-facing living room and street door without a transition zone, even a small one, is never as cosy as it could be as the outside world directly touches the intimate space. An open-plan workplace can create a difficult atmosphere where there is no privacy, and quiet contemplation is impossible.

Anywhere humans have different modes of being, they can benefit from a transition zone like this. Where you see a step change in how people behave, ensure there is a space where the attitude change can happen at a comfortable pace.

119 ARCADES

Arcades—covered walkways at the edge of buildings, which are partly inside, partly outside—play a vital role in the way that people interact with buildings.

— Christopher Alexander, A Pattern Language, p. 581.

Arcades, where the line between the shops and the streets blurs

Supporting the property of both deep interlock and boundaries, arcades connect the indoor spaces of buildings with the outside public world of people. Christopher Alexander used arcades in the Eishin campus project to create some of the covered walkways. The pattern creates outdoor spaces which are innately connected to the buildings.

127 INTIMACY GRADIENT

Unless the spaces in a building are arranged in a sequence which corresponds to their degrees of privateness, the visits made by strangers, friends, guests, clients, family, will always be a little awkward.

— Christopher Alexander, A Pattern Language, p. 610.

As you enter a house, the first rooms are still somewhat public. As you move from room to room, entering deeper into the home, you find places of deeper family meaning. You find hobby spaces and bedrooms. Children’s caves and hidden nooks.

Without the gradient, a home has rooms which feel awkward. A door on a studio flat opens onto the one room and never feels wholly a home. There is no space separated from the rest; there is no interior to the interior.

A good gradient will mean there are places where guests can feel welcome, and they can naturally gravitate towards places for their particular relationship with the family. Some may find acceptance in the garage, others the kitchen or garden shed with their host. But notice that only the family or very close friends will feel comfortable entering into all the inner family spaces.

159 LIGHT ON TWO SIDES OF EVERY ROOM

When they have a choice, people will always gravitate to those rooms which have light on two sides, and leave the rooms which are lit only from one side unused and empty.

— Christopher Alexander, A Pattern Language, p. 747.

Places with diffuse light allow us to see people more clearly. Having two or more natural light sources helps create rooms where you can see with less glare. We feel better in places like this because we observe the smaller movements on people’s faces and feel more connected to them.

With only one light source, a room will often be darker towards one end, creating shadows on people’s faces. Shadows create ambiguity and tension.

Narrow rooms with light at the end are the worst. Fix them by finding some way to open up light at the other end, or consider collapsing the room altogether. Sometimes, you can be better off with one nice dual-purpose room than two separate rooms of lower quality. A low wall dividing the single room well can make the utility obvious without affecting the quality of the light.

You don’t need to have two windows; just be aware that the deeper a room, the less reflective the walls, and the smaller the windows, the worse the room will feel to be in. Vast halls with huge windows on one side can work well if there are bright walls or other light sources to supplement the natural light. For example, rooms with a large mirror are more inviting as they naturally create a second set of light sources.

202 BUILT-IN SEATS

Built-in seats are great. Everybody loves them. They make a building feel comfortable and luxurious. But most often they do not actually work. They are placed wrong, or too narrow, or the back does not slope, or the view is wrong, or the seat is too hard. This pattern tells you what to do to make a built-in seat that really works.

— Christopher Alexander, A Pattern Language, p. 925.

The pattern is quite clear. Built-in seats are lovely. They give a room character. They may seem inflexible, but the pattern addresses that by asking that they be built with great attention to detail. When a seat is just right, moving it would be as unnecessary as moving a window.

Why do we like them? I think it’s because they feel so much more attached (literally) to the building, but also how they connect us to the outside world. Every building is an inside within an outside. Many patterns address this to make it the best possible combination of the two. Built-in seats are just another tool to do that.

222 LOW SILL

One of a window’s most important functions is to put you in touch with the outdoors. If the sill is too high, it cuts you off.

— Christopher Alexander, A Pattern Language, p. 1051.

A strange revelation with some research backing it up is that windows should start lower in the wall than we regularly build them. Windows for sitting by should be made so you can connect with the ground outside, which means the regular sills we build are too high as they cut off all but the horizon when regarded from a seated position.

For upper floors, the connection to the ground is overwhelmed by the desire for safety, so as the elevation increases, so does the height of the sill.

A low, deep sill also invites plants, or even sitting in the window itself. Even with a 30-35cm high sill, we don’t lose any of the window’s normal function. Instead, we gain the world.

224 LOW DOORWAY

High doorways are simple and convenient. But a lower door is often more profound.

— Christopher Alexander, A Pattern Language, p. 1056.

This is a pattern of enforcing a change in the person crossing a threshold by asking them to stoop. It’s one of a few patterns using doorways to address problems.

Requiring a moment of thought by a physical presence is a recurring theme. I would consider the placement of some actionable religious ornamentation in houses (the Jewish mezuzah comes to mind) or places of worship (the location of a font in a church, for example) as examples of this higher pattern. The low doorway follows this principle. Make a place more profound by ensuring some mental shift takes place. Distract a person into being present.

226 COLUMN PLACE

Thin columns, spindly columns, columns which take their shape from structural arguments alone, will never make a comfortable environment.

— Christopher Alexander, A Pattern Language, p. 1065.

The smaller supporting columns we typically see these days break up a space but don’t add anything in return. Good-sized columns are places in themselves. We can lean against them, and they can be imprinted with images or otherwise decorated. When they are substantial enough, we can attach memories to them, or even seats.

Without sufficient girth to support human interaction, they become mere obstacles, devoid of any value beyond supporting what lies above. The pattern name implies the outside of the column is a place, which is only correct when it provides something for humans as they move through their world.

In a modern setting, large columns are not unwanted. People still want focal points around which they can gather, even inside an office building. Otherwise, we would not have water cooler chats.

229 DUCT SPACE

You never know where pipes and conduits are; they are buried somewhere in the walls; but where exactly are they?

— Christopher Alexander, A Pattern Language, p. 1076.

This pattern follows from other patterns, somewhat relying on having followed the form of vaulted ceilings, so it is not always applicable. The basic idea is to have a place which makes sense for all the wiring and piping, and to make sure it’s big enough that maintenance is simple and convenient. Rather than putting everything at floor level or in the walls as we do currently, locate it all up in the air, above the furniture and out of the way.

To my mind, this pattern makes a lot of sense if you can do it. It’s almost an inversion of the standard way in which you wire up a modern office with a raised floor so you can easily trace wires anywhere without causing an inconvenience. However, even with a raised floor, desks need to be moved to trace new paths. The benefit of locating above the space rather than below is that above is always more readily available.

237 SOLID DOORS WITH GLASS

An opaque door makes sense in a vast house or palace, where every room is large enough to be a world unto itself; but in a small building, with small rooms, the opaque door is only very rarely useful.

— Christopher Alexander, A Pattern Language, p. 1103.

When the rooms in a sequence do not each have their own light, a glass panel in a substantial door provides a way to exclude the outer room while maintaining a connection to it. A room with a closed door always seems bounded and enclosed. This is why we have large panes of glass in patio doors leading onto gardens: so we can remain separate while connected.

Using interior doors with glass does the same for the house’s sense of connectedness. We allow rooms to be closed, but still part of the activity. The light transfer helps equalise the feeling while also increasing the literal level of light, helping achieve 159 LIGHT ON TWO SIDES OF EVERY ROOM.

251 DIFFERENT CHAIRS

People are different sizes; they sit in different ways. And yet there is a tendency in modern times to make all chairs alike.

— Christopher Alexander, A Pattern Language, p. 1158.

It seems obvious when you think about it. We have office chairs, designed specifically to address this issue. They are expensive because they meet two opposing needs: uniformity and conformity. An office chair is expensive because making something look like everything else while behaving in a custom manner is a difficult tension to relieve.

So, the solution is to stop trying to make everything look the same. Accept that differences are normal and that on average no-one is average.

Other software patterns

Outside of the patterns in the GoF book[GoF94], there were many more tomes of patterns available. Some were more successful than others. Some did not even claim to be design pattern books. Of those books, I have referenced only a few patterns, but for completeness, I summarise them here.

Ubiquitous language

From Domain Driven Design[DDD04], the pattern of requiring everyone involved in the development to use the same terms to describe things as the experts in the domain we’re developing for.

In many developments, you will notice how software developers speak in code. Not literal programming languages, but they will refer to things by how they are implemented, not by what they mean in the domain. This has a disconnecting effect on their interactions with the experts, and it doesn’t make connecting the layers of the code together very easy either.

When you think of something in the domain, such as a boat for a holiday fishing company, a programmer might think of it as a row or column in a schedule. If they start calling a boat a boat-column, it might confuse the experts, but not quite as much as calling it a row-boat. However, when they start referring to them as schedule items rather than literal machines, some nuance is lost. Maintenance and staffing information is complicated, and when someone points out some boats have more seats, others have wheelchair fittings, and some have different setups which can only be captained by certain crew, the information that a human scheduler needs to do their job would be missing.

The Ubiquitous language pattern requires the same language to be used everywhere, from conversations about the design to the names of classes in the model. When a term has been found to be ambiguous, it’s time to refactor.

The benefit of Ubiquitous language is that it helps find these differences by forcing the developer to create the natural interfaces for all these different views on the world. As a programmer, you likely think about files and classes, but the user doesn’t think about those. An album for one user is their copy on CD, not yet arrived. For another, it’s a title to purchase and keep on a cloud account. For yet another, it’s the sheet music or the art on the cover, or possibly even an income stream for their performance.

Giving things the right name helps find what can and what should not be available to those interacting with the objects.

Bounded context

Also from Domain Driven Design[DDD04], the pattern of being aware of a mapping between the context and model and how there can be more than one of each.

Different experts use different ways to identify the same things at different times. Someone will refer to the patient by their ailment, someone else by their name, and another only knows them by which bed they are in.

In projects where there are strongly separated user groups, you will find they do not always share a common language. If your users from group A use a single word for three different things that group B distinguishes, then you have a problem that is not easily resolved. A Bounded context is one where this information-loss gap is recognised and protected against. The pattern requires that the contexts be identified and the border between them be observed. This pattern is a cautionary tale about accidentally assuming you share a data model and making assumptions about the data from the point of view of one context alone.

Specification

The Specification pattern from Domain Driven Design[DDD04] describes an object, a ‘spec’, you can use as a predicate. The query method on the spec takes a candidate and returns a true or false response. A spec can be as simple as constantly returning true or false, but is usually configured in some way. Each instance of a spec is a reusable test. Constructor arguments set up the spec object to deliver a verdict later, such as verifying whether the object was over or under a limit. Its behaviour will generally stay the same once constructed. Think of spec as encapsulating judgement in an object.

In the example in chapter 10, the spec was realised in Python as a lambda. The principle of a spec is not that it is an object, but that it can be used as a decision-maker.

The examples often add the composition of spec objects with other spec objects, such as the unary NOT spec and the binary AND and OR specs that take other specs as arguments during construction and combine them logically during query evaluation.
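
A minimal sketch in C++ (rather than the Python lambda of chapter 10); the Invoice type and the spec names are mine:

```cpp
#include <iostream>
#include <memory>

struct Invoice { double total; bool paid; };

// A spec encapsulates a single judgement about a candidate.
class Spec {
public:
    virtual ~Spec() = default;
    virtual bool isSatisfiedBy(const Invoice& candidate) const = 0;
};

class OverLimit : public Spec {
public:
    explicit OverLimit(double limit) : limit_(limit) {}  // constructor arguments configure the verdict
    bool isSatisfiedBy(const Invoice& candidate) const override { return candidate.total > limit_; }
private:
    double limit_;
};

class Unpaid : public Spec {
public:
    bool isSatisfiedBy(const Invoice& candidate) const override { return !candidate.paid; }
};

// Binary AND spec: combines two other specs at query time.
class AndSpec : public Spec {
public:
    AndSpec(std::shared_ptr<Spec> lhs, std::shared_ptr<Spec> rhs) : lhs_(std::move(lhs)), rhs_(std::move(rhs)) {}
    bool isSatisfiedBy(const Invoice& candidate) const override {
        return lhs_->isSatisfiedBy(candidate) && rhs_->isSatisfiedBy(candidate);
    }
private:
    std::shared_ptr<Spec> lhs_, rhs_;
};

int main() {
    AndSpec overdue(std::make_shared<OverLimit>(1000.0), std::make_shared<Unpaid>());
    std::cout << overdue.isSatisfiedBy({1500.0, false}) << '\n';  // 1
    std::cout << overdue.isSatisfiedBy({500.0, false}) << '\n';   // 0
}
```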

Collect Low-level Protocol

One of the very earliest patterns in software. It is from Using Pattern Languages for Object-Oriented Programs[UPL87].

The pattern suggests that, after initial decomposition, you find the low-level code necessary to support the decomposition: utility methods, protocols, file formats, that kind of thing. It is a process pattern, implying an up-front requirements-and-object-modelling phase before committing to any coding.

Pipes and filters

Published in Pattern Oriented Software Architecture[POSA96], page 53, this architectural pattern provides a structure for processing data. This pattern can be the backbone of the complete application, but these days is a subarchitecture found in many other applications that process data on demand.

The basic premise is that of data streams passing through filters. The filters can do much more than cull; they can adjust, emit more, or completely transform the data. Each filter can take more than one input and can produce more than one output if necessary. The main point is that each filter can be a pure function of the incoming data.

This eminently debuggable processing pattern has appeared repeatedly over the decades of software development. It appears to be a self-forming solution to the context of having a lot of data in the same format, wanting to transform it in the same way, and not necessarily being able to hold all the data in working memory at the same time. It is also emergent from wishing to create complicated, purely functional transformation sequences, as the in-place mutation normally associated with sequences of operations is usually not available in pure functional languages.
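
A minimal sketch where each filter is a pure function of its input and the pipeline is simply function composition; the filter names are mine:

```cpp
#include <algorithm>
#include <cctype>
#include <iostream>
#include <string>
#include <vector>

// Each filter is a pure function of its incoming data.
std::vector<std::string> splitLines(const std::string& input) {
    std::vector<std::string> lines;
    std::string current;
    for (char c : input) {
        if (c == '\n') { lines.push_back(current); current.clear(); }
        else current.push_back(c);
    }
    if (!current.empty()) lines.push_back(current);
    return lines;
}

std::vector<std::string> dropEmpty(std::vector<std::string> lines) {
    lines.erase(std::remove_if(lines.begin(), lines.end(),
                               [](const std::string& l) { return l.empty(); }),
                lines.end());
    return lines;
}

std::vector<std::string> shout(std::vector<std::string> lines) {
    for (auto& line : lines)
        std::transform(line.begin(), line.end(), line.begin(),
                       [](unsigned char c) { return static_cast<char>(std::toupper(c)); });
    return lines;
}

int main() {
    // The pipeline: data flows through a sequence of filters.
    for (const auto& line : shout(dropEmpty(splitLines("one\n\ntwo\nthree"))))
        std::cout << line << '\n';
}
```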

Whole-part

Also published in Pattern Oriented Software Architecture, the Whole-part pattern suggests that some objects are more than the sum of their parts. There are three variants of the Whole presented, from a strict assemblage to a loosely connected set of objects. In every case, the Parts held within make up part of the information of the object, but there are capabilities specific to the Whole.

An example I like to provide is based on having a set of points. Each point has a position in space. A point can be moved in space, and any software that handles transformations of these points should be able to adjust their position. However, it is only once you collect the points into a composite or compound object representing more than one of them that the option to scale or rotate them makes any sense. In essence, before the Parts are a collection, orientation and scale cannot exist, and it is from the Whole that this property emerges.

The pattern solution suggests keeping the elements inaccessible to others and providing an interface only to the greater Whole object. In this pattern, Parts are encapsulated in the Whole object.

Bibliography

[AFo21CA93] A Foreshadowing of 21st Century Art - The Geometry of Very Early Turkish Carpets

Christopher Alexander, 1993, Oxford University Press

[APL77] A Pattern Language

Christopher Alexander, Sara Ishikawa, Murray Silverstein, with Max Jacobson, Ingrid Fiksdahl-King, Shlomo Angel, 1977, Oxford University Press

[AScrumBook19] A Scrum Book: The Spirit of the Game

Jeff Sutherland, James O. Coplien, The Scrum Patterns Group, 2019, Pragmatic Bookshelf

[AaC72] Autopoiesis and Cognition: The Realization of the Living

Humberto Maturana, Francisco Varela, 1972, D. Reidel Publishing Company

[Almanac00] The Pattern Almanac 2000

Linda Rising, 2000, Addison-Wesley

[ArtFear94] Art & Fear: Observations on the Perils (and Rewards) of Artmaking

David Bayles, Ted Orland, 1994, Capra Press

[Battle12] The Battle for the Life and Beauty of the Earth: A Struggle Between Two World Systems

Christopher Alexander, Hans Joachim Neis, Maggie Moore Alexander, 2012, Oxford University Press

[CC08] Clean Code: A Handbook of Agile Software Craftsmanship

Robert C. Martin, 2008, Pearson Education, Inc.

[CNT19] Cloud Native Transformation: Practical Patterns for Innovation

Pini Reznik, Jamie Dobson, Michelle Gienow, 2019, O’Reilly Media, Inc.

[CollaborationP14] Collaboration Patterns: A Pattern Language for Creative Collaborations

Takashi Iba, Iba Laboratory, 2014, CreativeShift

[Crisis82] Out of the Crisis

W. Edwards Deming, 1982, Massachusetts Institute of Technology, Center for Advanced Engineering Study

[DAM88] Design After Modernism: Beyond the Object

John Thackara, 1988, Thames and Hudson

[DDD04] Domain Driven Design

Eric Evans, 2004, Pearson Education, Inc.

[DLC82] The Linz Café, Das Linz Café

Christopher Alexander, 1982, Oxford University Press

[DoET88] The Design of Everyday Things

Donald Norman, 1988, Basic Books

[Drive11] Drive, the surprising truth about what motivates us

Daniel H. Pink, 2011, Canongate Books, Ltd

[EIP04] Enterprise Integration Patterns

Gregor Hohpe, Bobby Woolf, et al., 2004, Pearson Education, Inc.

[EPE00] Extreme Programming Explained

Kent Beck, 2000, Addison-Wesley

[Fearless04] Fearless Change: patterns for introducing new ideas

Mary Lynn Manns Ph.D., Linda Rising Ph.D., 2004, Addison-Wesley

[Formal63] The Formal Basis of Modern Architecture

Peter Eisenman, 1963; republished in 2006 by Lars Müller Publishers

[GPP11] Game Programming Patterns

Robert Nystrom, 2011

[GoF94] Design Patterns: Elements of Reusable Object-Oriented Software

Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides, 1994, Addison-Wesley

[Goal84] The Goal: A Process of Ongoing Improvement

Eliyahu M. Goldratt, Jeff Cox, 1984, National Productivity Institute (South Africa), Pretoria

[Grabow83] Christopher Alexander, The Search for a New Paradigm in Architecture

Stephen Grabow, 1983, Oriel Press Ltd

[HBL94] How Buildings Learn, What Happens After They’re Built

Stewart Brand, 1994, Viking

[HFDP04] Head First Design Patterns

Eric Freeman, Elisabeth Robson, with Kathy Sierra and Bert Bates, 2004, O’Reilly Media, Inc.

[Hooked14] Hooked: How to Build Habit-Forming Products

Nir Eyal, 2014, Portfolio Penguin

[KB05] Extreme Programming Explained: Embrace Change

Kent Beck, Cynthia Andres, 2005, Addison-Wesley

[Kilmann84] Beyond The Quick Fix: managing five tracks to organizational success

Ralph H. Kilmann, 1984, Jossey-Bass Publishers

[Lawler83] Pay and Organizational Development

Edward E. Lawler, 1983, Addison-Wesley Publishing Company

[LearningP14] Learning Patterns: A Pattern Language for Creative Learning

Takashi Iba, Iba Laboratory, 2014, CreativeShift

[Lila91] Lila: An Inquiry into Morals

Robert M. Pirsig, 1991, Bantam Books

[MLDP20] Machine Learning Design Patterns

Valliappa Lakshmanan, Sara Robinson, Michael Munn, 2020, O’Reilly Media, Inc.

[MMM75] The Mythical Man Month

Fred Brooks, 1975, Addison-Wesley

[MPDfC98] Multi-Paradigm Design for C++

James O. Coplien, 1998, Addison-Wesley

[MapReduce12] MapReduce Design Patterns

Donald Miner, Adam Shook, 2012, O’Reilly Media, Inc.

[NODEJS16] Node.js Design Patterns

Mario Casciaro, Luciano Mammino, 2016, Packt Publishing Ltd.

[NoO1-01] The Nature of Order: An Essay on the Art of Building and the Nature of the Universe. Vol 1. The Phenomenon of Life

Christopher Alexander, 2001, Center for Environmental Structure: Berkeley, CA, USA

[NoO2-02] The Nature of Order: An Essay on the Art of Building and the Nature of the Universe. Vol 2. The Process of Creating Life

Christopher Alexander, 2002, Center for Environmental Structure: Berkeley, CA, USA

[NoO3-05] The Nature of Order: An Essay on the Art of Building and the Nature of the Universe. Vol 3. A Vision of a Living World

Christopher Alexander, 2005, Center for Environmental Structure: Berkeley, CA, USA

[NoO4-04] The Nature of Order: An Essay on the Art of Building and the Nature of the Universe. Vol 4. The Luminous Ground

Christopher Alexander, 2004, Center for Environmental Structure: Berkeley, CA, USA

[Notes64] Notes on the Synthesis of Form

Christopher Alexander, 1964, Harvard

[OPoASD04] Organizational Patterns of Agile Software Development

James O. Coplien, Neil B. Harrison, 2004, Pearson, 1st edition

[Outlining11] Outlining Your Novel

K. M. Weiland, 2011, PenForASword

[PH98] The Patterns Handbook

Linda Rising, 1998, Cambridge University Press

[PLoPD3-97] Pattern Languages of Program Design 3

Robert C. Martin, Dirk Riehle, Frank Buschmann, 1997, Addison-Wesley

[PLoPD5-06] Pattern Languages of Program Design 5

Dragos Manolescu, Markus Voelter, James Noble, 2006, Addison-Wesley

[PLoPD95] Pattern Languages of Program Design

James O. Coplien and Douglas Schmidt, 1995, Addison-Wesley

[POSA2-00] Pattern-Oriented Software Architecture Volume 2: Patterns for Concurrent and Networked Objects

Douglas C. Schmidt, Michael Stal, Hans Rohnert, Frank Buschmann, 2000, John Wiley & Sons Ltd.

[POSA3-04] Pattern-Oriented Software Architecture Volume 3: Patterns for Resource Management

Michael Kircher, Prashant Jain, 2004, John Wiley & Sons Ltd.

[POSA4-07] Pattern-Oriented Software Architecture Volume 4: A Pattern Language for Distributed Computing

Douglas C. Schmidt, Frank Buschmann, and Kevlin Henney, 2007, John Wiley & Sons Ltd.

[POSA5-07] Pattern Oriented Software Architecture Volume 5: On Patterns and Pattern Languages

Douglas C. Schmidt, Frank Buschmann, and Kevlin Henney, 2007, John Wiley & Sons Ltd.

[POSA96] Pattern-Oriented Software Architecture Volume 1: A System of Patterns

Hans Rohnert, Regine Meunier, Frank Buschmann, Michael Stal, Peter Sommerlad, 1996, John Wiley & Sons Ltd.

[PT15] Pattern Theory: Introduction and Perspectives on the Tracks of Christopher Alexander

Helmut Leitner, 2015, Helmut Leitner, HLS SOFTWARE

[PoEAA03] Patterns of Enterprise Application Architecture

Martin Fowler, 2003, Pearson Education, Inc.

[PresentationP14] Presentation Patterns: A Pattern Language for Creative Presentations

Takashi Iba, Iba Laboratory, 2014, CreativeShift

[React17] React Design Patterns and Best Practices

Michele Bertoli, 2017, Packt Publishing Ltd.

[RtP04] Refactoring to Patterns

Joshua Kerievsky, 2004, Addison-Wesley

[Stick07] Made To Stick

Chip Heath and Dan Heath, 2007, Random House

[SysBible02] The Systems Bible

John Gall, 2002, The General Systemantics Press

[TFaS11] Thinking, Fast and Slow

Daniel Kahneman, 2011, Farrar, Straus and Giroux

[TKCC95] The Knowledge Creating Company

Hirotaka Takeuchi and Ikujiro Nonaka, 1995, Oxford University Press

[TOE75] The Oregon Experiment

Christopher Alexander, Murray Silverstein, Shlomo Angel, Sara Ishikawa, Denny Abrams, 1975, Oxford University Press

[TPoH85] The Production of Houses

Christopher Alexander, Howard Davis, Julio Martinez, Donal Corner, 1985, Oxford University Press

[TPoS58] La poétique de l’espace (The Poetics of Space)

Gaston Bachelard, 1957, Presses Universitaires de France, Paris

[TT19] Team Topologies: Organizing Business and Technology Teams for Fast Flow

Matthew Skelton, Manuel Pais, 2019, IT Revolution Press

[TTWoB79] The Timeless Way of Building

Christopher Alexander, 1979, Oxford University Press

[Taylor1911] The Principles of Scientific Management

Frederick Winslow Taylor, 1911, Harper & Brothers

[Urban87] A New Theory of Urban Design

Christopher Alexander, Hajo Neis, Artemis Anninou, Ingrid King, 1987, Oxford University Press

[Use93] Usability Engineering

Jakob Nielsen, 1993, Morgan Kaufmann, Academic Press

[ZatAoMM74] Zen and the Art of Motorcycle Maintenance: An Inquiry into Values

Robert M. Pirsig, 1974, William Morrow and Company

[LIP09] Language Implementation Patterns

Terence Parr, 2009, Pragmatic Bookshelf

[LibStruct14] The Surprising Power of Liberating Structures

Henri Lipmanowicz, Keith McCandless, 2014, Liberating Structures Press

[PiGD05] Patterns in Game Design

Staffan Björk, 2005, Charles River Media

[ACinaT65] A City is Not a Tree.

Christopher Alexander, 1965

[Decomp72] On the Criteria To Be Used in Decomposing Systems into Modules

David Lorge Parnas, 1972

[Iba2021b] Extracting Key Elements in Pattern Mining

Takashi Iba, Yuya Oka, Haruka Kimura, Erika Inoue, 2021

[IbaHtWP21] How to Write Patterns

Takashi Iba, 2021

[TSADDL] Towards Simplifying Application Development, in a Dozen Lessons

Melvin E. Conway, January 3rd, 2017

[UPL87] Using Pattern Languages for Object-Oriented Programs
https://c2.com/doc/oopsla87.html
Kent Beck, Ward Cunningham, September 17, 1987, Technical Report No. CR-87-43

[Debate82] THE DEBATE: Contrasting Concepts of Harmony in Architecture
http://www.katarxis3.com/Alexander_Eisenman_Debate.htm
Christopher Alexander, Peter Eisenman, November 17th 1982

[AM01] Agile Manifesto
https://agilemanifesto.org
Kent Beck, Mike Beedle, Arie van Bennekum, Alistair Cockburn, Ward Cunningham, Martin Fowler, James Grenning, Jim Highsmith, Andrew Hunt, Ron Jeffries, Jon Kern, Brian Marick, Robert C. Martin, Steve Mellor, Ken Schwaber, Jeff Sutherland, and Dave Thomas, 2001

[Broadside] Writing Broadside
https://www.dreamsongs.com/Files/WritingBroadside.pdf
Richard P. Gabriel, 1996