Poisoned

Systems are not agents and never act decisively. Rather than cause immediate catastrophic failure, they introduce a slow death to anything threatening their existence. Over the last 20 years, the established software development world’s defence mechanisms poisoned the meaning of design patterns. The reduced efficacy brought less attention, lessening the need for rigour or depth of analysis. With weaker patterns accepted and published, the validity of the design pattern concept degraded further. The masses of patterns beyond those in the GoF book[GoF94] became a swamp, safe to navigate only for the experienced pattern enthusiast. Many fantastic patterns were drowned out in the mire of poorly conceived and sometimes flat-out incorrect anti-patterns or fabrications.

In the early Pattern Languages of Program Design books, there are many examples of patterns that do not pass the test. Surprisingly, the very first pattern in Pattern Languages of Program Design[PLoPD95], Functionality Ala Carte1, almost matches Christopher Alexander’s description of the Madhouse Balcony, his example of a non-pattern. It does not make the world better. It kicks complexity over the wall.

The pattern begins by describing the problem of providing ever-increasing simulation fidelity while running on the same hardware. The forces at play are a demand for further features while retaining system responsiveness. The solution presented is one where the developer abstractly reveals the features’ impact (the runtime cost). The end user is then expected to make informed decisions about which features they want by picking them from this menu of items while not needing to be technically savvy.

Suspicious lack of insight

In addition to the problem of adding some complexity to support the configurability, the pattern suffers from having only a single recorded appearance. It does not appear to refer to any other found examples and has no constraints. If it were a pattern, we should expect some reference to how others solved the problem in their context. As it is, the pattern simply suggests the decisions can be given to someone else to worry about. Rather than solve the problem, make it someone else’s problem.

If you have played computer games on both PC and console, you might recall a difference between those two environments concerning fidelity. Console games do not generally make graphics settings the problem of the player. Only with the advent of modern consoles and 4K HDR TV has the option become relatively acceptable to present to the player, and even then, only if the game has a broad enough demographic to garner a genuine benefit related to player preference.

Adapting fidelity in real-time can drive level-of-detail systems and upscaling stages to handle highly complex scenes while not dropping frames. We can do this per frame, which is much better than asking players to balance their diet. We see this superior pattern play out in productivity software where the software reduces fidelity while elements move, such as when adjusting filters and effects in a photo editing application. As I think about it, I realise there’s an element of this pattern in video streaming and autocomplete for web searches; the quality of the result in both cases is lower, but the latency and availability are considerably better.

For anyone working within such a domain, the pattern misses obvious recurring bugs or problems. This suspicious lack of problems leads to there being no wisdom, but also no authority. This lack of insight can be one of our metrics for rejecting a pattern. We come up against this lack of conflict again later with the Command pattern in the reference section.

So, Functionality Ala Carte is not a fully formed pattern, and what little of it is a pattern might be an anti-pattern: it is self-serving, pushing the deeper problem onto another element or owner.

Provably false

Another pattern from the first PLoPD book2 was in ‘A Generative Development-Process Pattern Language’. Pattern number 42 is Compensate Success. The pattern cites a problem of providing appropriate motivation for success. The context is an organisation with tight schedules and a high-payoff market. So, I would say that’s a typical software development environment. The following section on forces is interesting for how opinionated it is.

Schedule motivations tend to be self-fulfilling: a wide range of schedules may be perceived as equally applicable for a particular task. Schedules are therefore poor motivators in general. Altruism and egoless teams are quaint, Victorian notions. Companies often embark on make-or-break projects; such projects should be managed differently from others. Disparate individual rewards motivate those who receive them, but they may frustrate their peers.

— James Coplien, Pattern Languages of Program Design[PLoPD95], p. 234.

I agree with the first point, about schedules not being good motivators, but less so with the others. And these all still feel like context to me. They set the scene, but what’s the force? If we try to turn this into a pattern, it must resolve something unresolved. Perhaps the force is the desire of an organisation to get the best performance out of the individual developers. I can get behind this as a problem worth attempting to solve.

So, it’s still a problem of motivation, but at least we’re stating it clearly. Now, it’s become evident that the organisation has a different goal from the individual. Otherwise, why would the organisation think it needs to motivate the individual?

The second and last points are highly problematic, but there are references to support the claims, so perhaps I’m wrong? The first is Pay and Organisation Development[Lawler83], which is all about pay and incentives, but when you read the book, it is not about mind-workers but about workers in general, in plants. In fact, on page 21, you see a graph showing the flow of motivation, effort, and performance in little boxes. It sits opposite the page where the text considers how an individual mentally weighs how much to perform.

Given a number of alternative levels of behaviour (ten, fifteen, or twenty units of production per hour, for example), an individual will choose the level of performance which has the greatest motivational force associated with it, as indicated by a combination of the relevant expectancies, outcomes, and values.

— Edward E. Lawler, Pay and Organisation Development, p. 20.

Quaint Victorian notion

When was the last time you considered performing? Have you ever considered putting in more or less effort? Based on reward or otherwise? Or, like most software developers, do you work as best you can because the real reward is completing the work? The intrinsic reward for being good at what you do, knowing you positively impact your community, and perhaps gaining a little self-improvement? The only externally provided reward I cherish is the opportunity to do things my way.

The pattern appears to be backing up its claim based on evidence gathered from factory and plant workers. Physical labourers might increase performance when you compensate them with additional pay, but I do not believe software developers are labourers.

The second reference is no better, citing:

Organisations offer rewards; individuals offer performance.

— Ralph H. Kilmann, Beyond the Quick Fix[Kilmann84], p. 229.

What makes this evidence even more inapplicable is that it steps on its own toes, later claiming that intrinsic rewards are valuable while extrinsic (employer-given) rewards seem to cause problems.

For example, if employees’ paychecks are out of line with what they believe they deserve, they become very upset. If employees feel that others are getting more pay for doing less work, they become angry.

— Ralph H. Kilmann, Beyond the Quick Fix, pp. 233–234.

Daniel Pink’s work on motivation in Drive[Drive11] and Herzberg’s two-factor theory confirm and extend the content in this part of the book and show how the pattern is not actually seen in the real world.

We can also see that extrinsic rewards do not make the world more wholesome by the observation that bonuses are natural Cobra Effect triggers. Motivating your high-performance individuals is not a job of cash or celebration but one of understanding what drives them and giving them more of it. Herzberg’s two-factor theory suggests money does not motivate; it only demotivates by absence.

Daniel Pink’s Drive suggests giving people more autonomy and more opportunities to grow their skills. The best way to release someone’s full potential is to give them more chances to meaningfully contribute to the success of their community. What’s more, Ralph H. Kilmann suggested these same actions!

At this point, I claim the present form of Compensate Success cannot be a pattern and might even be an anti-pattern. All the related patterns in the language now come under scrutiny, as they are suspect by association. Code Ownership is an old idea that has had its time but does not work well. Size the Schedule looks too much like command and control and avoids early feedback. Solo Virtuoso smells like someone just wanted some autonomy. Other patterns seem similarly ill-conceived: Developer Controls Process, Fire Walls, Gatekeeper, Divide and Conquer, Decouple Stages.

Upon review, the pattern language fits a specific environment. I recognise it from first-hand experience. An ego-driven developer without appreciation for the complexity of a large, healthy organisation would feel comfortable with these patterns. I can see why this pattern language might have been published. Many of the reviewers likely felt at ease with the prescriptions. Most software developers likely conform to this description. This person could have been myself within the first decade of my career.

Diluted

With the patterns now diluted to a collection of mixed quality and credibility, the GoF book started to look like a source of consistency. The development world was in the middle of its object-oriented frenzy, with nearly every mainstream language being object-oriented or providing direct support for the paradigm. The solutions found in the GoF book applied to most developments because they were fundamental and paradigm-oriented rather than problem-centric. But with that came the inversion—the solution in search of a problem.

Developers early in their careers were tempted into believing there was a pattern for every concern. They would ask which design pattern they should use to solve their problem. In a world where all the design problems of programming are known or even knowable, this might have been a good practice for beginner programmers. They should not need to solve unsolved problems so early in their career. However, we do not live in such a world. Programming itself has not been around long enough to have most of its problems solved. Looking around at the state of software development, we’re learning about more things we haven’t yet solved3 faster than we’re solving the things we already know about.

With only 23 famous and credible patterns, there is unlikely to be a pattern that solves your problem. The question then posed becomes a mind-narrowing activity. The developer stops thinking about the situation and starts looking to see which of the 23 solutions fits it best. Thus, we end up with misused patterns, which deal a blow either to the patterns’ credibility or to the developer’s capability, neither of which is desirable.

Design patterns, or at least the way they are portrayed, are also toxic, blocking ingenuity in a different way. The Sapir–Whorf hypothesis4 explains how the structure of the language we use and the idioms we live with help define what kinds of nuances we can communicate. As invention is often the combination of multiple ideas in our minds and making new connections, being unable to think about things as different from each other or from a new perspective becomes stifling. If you know a pattern matching a problem and the language overlaps your use case, you’re likely to assume the pattern is suitable and try to find a way to bend it into your problem space to implement a solution. If your language includes these patterns, there’s inertia against thinking about an alternative pattern that solves the problem more elegantly.

So, as an example, let’s talk about the Singleton pattern. Yes, I like to pick on this pattern, but that’s because it’s due some proper derision. As a reminder, the Singleton pattern is meant to help ensure a class can have no more than one instance and provide a global point of access to that instance. However, it also accidentally ends up doing the following:

  • It locks us to a single class (which hurts testing by making unit tests harder to isolate and mocks harder to inject).
  • It provides a creational behaviour, creating the object just in time,
    • which affects runtime performance because construction is not free, but when it happens, no one knows;
    • thus there is no decided order of construction, making initialisation less well defined, and some orderings may hide bugs; and
    • they are potentially never destroyed, meaning clean-slate tests can’t work, and we can’t be sure we’ve freed all our resources because we never naturally achieve complete shutdown.

Notice how solutions involving a Singleton might only need one or two aspects. Consider the idea of a Singleton used for holding access to a logging service. We want general access to logging, so we expose the LoggingServiceSingleton, which has methods for logging, setting the verbosity, and creating new channels for your messages. But why do we need a Singleton for this design? What are the features required? Global access? Yes. Only one instance? Not sure about that. What if a different system wanted to, for security reasons, have its own logger encrypted at the source? And what about the lifetime of the logger? Do we really want it to exist when it’s first used, or do we want to control when it is created so that we don’t start logging before we know we should or where we should be logging to? It’s beginning to feel like we want something different.
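As a minimal sketch (in Python, with names of my own invention, not from any real library), here is the same logging design without a Singleton. The application owns the service’s lifetime: construction order is decided explicitly in one place, a second independent instance is unremarkable, and shutdown is guaranteed to happen.

```python
# Sketch only: illustrative names, not a real logging library.
class LoggingService:
    def __init__(self, sink):
        self.sink = sink        # where messages go is decided up front
        self.verbosity = 1

    def log(self, level, message):
        if level <= self.verbosity:
            self.sink.append(message)

    def shutdown(self):
        # We naturally achieve a complete shutdown and can free resources.
        self.sink.append("-- log closed --")


def main():
    general_sink, secure_sink = [], []
    # Construction order is explicit, and two instances coexist happily:
    # "only one instance" was never a real requirement of the design.
    general = LoggingService(general_sink)
    secure = LoggingService(secure_sink)  # e.g. could encrypt at the source
    general.log(1, "app started")
    secure.log(1, "auth token issued")
    general.shutdown()
    secure.shutdown()
    return general_sink, secure_sink
```

Nothing here is lazy, hidden, or immortal, yet general access remains possible by handing the service to whoever needs it.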

Another implementation of Singleton can be when a repository is responsible for an object, but there’s only one instance in the repository. This is a Singleton as there’s a global way to access it, and we have only one instance. When you think about it, how are any of the long-lived tables in a database different from a Singleton? Now, what about null? Isn’t that also a Singleton? We have global access to it, and there is only one null object. But how often have you heard anyone refer to null as a Singleton? How many times has anyone referred to using an SQL query to get at a Singleton? Language encourages thinking, but names can stifle it.

Okay, enough of the Singleton pattern. What other patterns hinder alternative thinking about solutions?

Mediator stands in for having strong idioms of messaging or a message bus. Composite gets in the way of thinking about structure as being external to the objects within it, so it promotes the idea of structure as something intrusive and renders membership in multiple structures unnatural. Interpreter begins to address this, but so few people understand it that Composite tends to become a lock-in model. Strategy stops you from thinking about currying and lambdas. Decorator starts to address currying but then breaks the model by suggesting wrapping more than one method at a time. And Flyweight, if you ever even think about it, stops you from thinking about better data models than object-oriented ones.
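To make the Strategy point concrete, here is a sketch (hypothetical pricing functions, Python for brevity) of strategies as plain callables, with partial application doing the configuring. No interface, no class hierarchy.

```python
# Sketch only: a "strategy" as a first-class function.
from functools import partial

def linear_price(rate, quantity):
    return rate * quantity

def bulk_price(rate, threshold, discount, quantity):
    # The discount applies once the quantity reaches the threshold.
    total = rate * quantity
    return total * (1 - discount) if quantity >= threshold else total

# Selecting a strategy is just partially applying a function (currying).
retail = partial(linear_price, 2.0)
wholesale = partial(bulk_price, 2.0, 100, 0.5)

def checkout(pricing, quantity):
    return pricing(quantity)  # the context needs no Strategy interface
```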

One of my favourites to point out would be Command and Memento for stateful structure manipulation and undo operations. The pairing stops you from thinking about immutable state solutions, which can lead to interesting new opportunities. I cannot claim to have come up with the excellent solutions presented by Sean Parent, who showed5 how immutable state provided history and undo procedures for Adobe® Photoshop®, to much applause at conferences where he talked about his concepts of Better Code. However, powerful as the technique he presented is, it is only one more alternative to the design pattern technique of using Memento to store the state of objects before applying a Command. There are other ways to write undo–redo systems, such as:

  • Save the state after every change to a different location. It’s dumb but works, so it should not be ignored.
  • Make your commands fully reversible. That is, they are never destructive. This isn’t always possible (for example, when one action creates an object and later actions refer to it, undoing and redoing over a sequence gets complicated), but when it is, it’s a choice you could make.
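As a minimal sketch of the second option (illustrative names, Python for brevity), each step carries its own inverse, so undo needs neither a Memento nor a saved snapshot:

```python
# Sketch only: fully reversible commands.
class Insert:
    """Insert text at a position; knows how to invert itself."""
    def __init__(self, pos, text):
        self.pos, self.text = pos, text

    def apply(self, doc):
        return doc[:self.pos] + self.text + doc[self.pos:]

    def invert(self, doc):
        return doc[:self.pos] + doc[self.pos + len(self.text):]


class Editor:
    def __init__(self, doc=""):
        self.doc, self.history = doc, []

    def do(self, step):
        self.doc = step.apply(self.doc)
        self.history.append(step)   # we store the step, not a snapshot

    def undo(self):
        self.doc = self.history.pop().invert(self.doc)
```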

But here’s the problem: when people ask about a design pattern for undo, most are told how to use Command and Memento. However, the undo pattern doesn’t rely on those patterns at all. The undo pattern is simply about trusting that your actions are steps that can be undone and redone without fear of losing valuable data. There’s nothing inherently command-based about the steps. Nothing requires you to know how to extract some state from anywhere. The pattern merely asks you to ensure the current state of a document is recoverable after you take a step. In addition, if you undo the step, you have an opportunity to retake the same action in the same way. We want to redo the act with the same preferences, brush strokes, and key presses.

Consider also that when you only store the user’s actions, you begin to see how you can track commands in a way that they become macros. Vim users are well aware of this way of building up operations. Each action is usually an elementary key press or sequence of key presses building up a much larger expressive language.
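A sketch of that observation (illustrative names): because the history stores the actions themselves rather than snapshots, replaying them elsewhere as a macro costs almost nothing.

```python
# Sketch only: recorded actions double as a macro.
def record(actions, action):
    actions.append(action)
    return action

def play_macro(actions, state):
    # Replaying the recorded steps is the Vim-style repeat, for free.
    for action in actions:
        state = action(state)
    return state

# Elementary actions building up a larger expressive language.
append_x = lambda text: text + "x"
upper = lambda text: text.upper()

macro = []
record(macro, append_x)
record(macro, upper)
```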

Missing the bigger picture

But worse than this, addressing the problem of an undo pattern by suggesting Command and Memento skips over aspects of the real problem. Missing from the combined pattern solution is how an undo pattern should include whatever matters to the user’s mental model and nothing else. What’s also missing is the modality of the undo operation.

Ignoring the user means ignoring which actions count as real, undoable steps and which are fleeting actions that do not change the document in a meaningful way. In such a system, macros may expand into actions, and each action would undo in turn, even though the user only took one action.

Consider modality, such as when an undo system keeps track of all commands, including selection and tab switching. In that system, it might become impossible to undo changes in one document while leaving modifications in a second unaffected. Imagine opening your favourite text editor and finding the undo buffer applied to all open documents, not just the one currently in focus. So, for an undo pattern, you need the undoing context, and that’s not covered at all by the pairing of Command and Memento.
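A sketch of that undoing context (illustrative names): key the undo stacks on the document, and undoing in one document can no longer destroy work in another.

```python
# Sketch only: undo scoped by context.
from collections import defaultdict

class UndoContexts:
    def __init__(self):
        self.stacks = defaultdict(list)    # one undo stack per document

    def push(self, doc_id, inverse_action):
        self.stacks[doc_id].append(inverse_action)

    def undo(self, doc_id, state):
        if not self.stacks[doc_id]:
            return state                   # nothing to undo *here*
        return self.stacks[doc_id].pop()(state)
```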

If you hadn’t noticed, the latter anti-pattern is present in the Windows file system undo mechanism.

  1. Open two explorer windows.
  2. In the first, make a folder and name it.
  3. In the other, open a different folder and create a file there.
  4. In the first window now, undo.

Notice how it destroys the file rather than revert the folder rename. Not what you expected?

Succession

Now, let’s talk about Chain of Responsibility. This pattern was superseded by how most web server routing software handles requests. The request–response pattern extended Chain of Responsibility considerably, with the intermediate state travelling up and down the chain. This pattern has been repeated in several places, but I didn’t find any reference to the Chain of Responsibility pattern in any documentation about it. I also did not see a reference to it as a pattern in books contemporary6 with the design.

It’s as if the pattern of request–response arrived, was assumed natural, and was ignored. It’s a useful pattern and should be understood so that it can be used outside the scope of web services. It is a Chain of Responsibility in which a hierarchical namespace decides who should be responsible, and in which intermediate state also affects the message responded to. Outside of web requests, the routing aspect of the pattern can be seen in some messaging libraries where subscribers can subscribe to path patterns.
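A sketch of that extended shape (my own names, loosely modelled on web middleware stacks): each link may act on the way down, delegate to the next, and then modify the response on the way back up, with the path deciding who is responsible.

```python
# Sketch only: Chain of Responsibility extended with request-response.
def logging_middleware(request, next_handler):
    response = next_handler(request)
    response["log"] = f"handled {request['path']}"  # acts on the way back up
    return response

def router(request, next_handler):
    # The hierarchical namespace (the path) decides who is responsible.
    if request["path"].startswith("/users"):
        return {"status": 200, "body": "user list"}
    return next_handler(request)

def not_found(request, next_handler):
    return {"status": 404, "body": "not found"}     # the end of the chain

def build_chain(*handlers):
    def terminal(request):
        raise RuntimeError("unhandled request")
    chain = terminal
    # Wrap from the innermost handler outwards, so each handler is given
    # the next link in the chain to delegate to.
    for handler in reversed(handlers):
        chain = (lambda h, nxt: lambda req: h(req, nxt))(handler, chain)
    return chain

app = build_chain(logging_middleware, router, not_found)
```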

A very close relative of Chain of Responsibility is the bubbling and capturing event handling in web browsers. Bubbling maps almost perfectly to the pattern but adds an external manager responsible for the propagation. Capturing offers the opposite, where an outer object captures an event before it is passed down the hierarchy to its children. But again, I have not seen reference to the GoF pattern in any JavaScript or web development literature.
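A sketch of bubbling viewed that way (illustrative names, not the real DOM API): the nodes never pass the event along themselves; an external dispatcher walks the parent chain and stops when a handler claims the event.

```python
# Sketch only: bubbling with an external propagation manager.
class Node:
    def __init__(self, name, parent=None):
        self.name, self.parent, self.handlers = name, parent, {}

    def on(self, event, handler):
        self.handlers[event] = handler

def dispatch(target, event):
    # The external manager: propagation lives here, not in the nodes.
    fired = []
    node = target
    while node is not None:            # bubble from target up to the root
        handler = node.handlers.get(event)
        if handler is not None:
            fired.append(node.name)
            if handler(event):         # returning True stops propagation
                break
        node = node.parent
    return fired
```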

Almost true

The poison here is often not how the patterns themselves are harmful but how they almost fit so many problems or hide better patterns behind a famous almost-solution. Their solutions are not only applied when they don’t fit but also when they mostly fit but something else would have been better, more flexible, and less complex and could have opened up more and different opportunities.

Another poison comes from how some incorrect patterns have remained prominent and bring their world view with them. A good story is always true, but only in the world it brings with it. When we see patterns such as Compensate Success, we see examples of a fictional world where people work harder and do better when you pay them more. And we believe in that world because the story is good and makes sense. But repeatedly stating the same hopeful statements doesn’t make them true—it only makes them believable.

The Gang of Four wrote that their book offered a way to improve our ability to talk about the larger elements of our software by introducing terms to use as a language. They were partially correct. They introduced a language to communicate common ideas well and efficiently. But as with all things to do with efficiency, there was a negative impact on effectiveness. When we use the language they provide, we walk a narrow path. When we walk the path often enough, we start to believe it is the only valid route.

1

Yes, it’s spelled incorrectly. But that is how it’s published in the book.

2

I also found this pattern language in the Patterns Handbook[PH98]. It was reproduced in the 2004 book, Organizational Patterns of Agile Software Development[OPoASD04] with two stars of confidence (the Alexandrian notation for extreme confidence the pattern is always present to some extent in all wholesome solutions for the problem), and referenced again in 2019 in A Scrum Book[AScrumBook19].

3

As we develop new techniques, we see new problems we need to solve, such as screen-space reflection artefacts, handling audio in VR, developing an element of self-doubt in AI language models, and finding a non-destructive financing model for search engines and social media.

4

Not one publication, but a series of hypotheses developed over time by Franz Boas, Edward Sapir, and Benjamin Lee Whorf into a principle of linguistic relativity. The way words are constructed out of pieces, and the rules of that construction, inhibit or afford ways of perceiving, thinking about, using, and extending both the words and their objects. This subject is far too large for a footnote.

5

Better Code: Runtime Polymorphism - Sean Parent, from NDC {London} January 2017 https://www.youtube.com/watch?v=QGcVXgEVMJg

6

I did not spot it when reading Enterprise Integration Patterns[EIP04], but I could have missed it.