Poisoned
Systems are not agents and never act decisively. Rather than cause immediate catastrophic failure, they introduce a slow death to anything threatening their existence. Over the last 20 years, the established software development world’s defence mechanisms poisoned the meaning of design patterns. The reduced efficacy brought less attention, which in turn lessened the demand for rigour and depth of analysis. With weaker patterns accepted and published, the validity of the design pattern concept degraded further. The masses of patterns beyond those in the GoF book[GoF94] became a swamp, safe to navigate only for the experienced pattern enthusiast. Many fantastic patterns were drowned out in the mire of poorly conceived and sometimes flat-out incorrect anti-patterns or fabrications.
In the early
The pattern begins by describing the problem of providing ever-increasing simulation fidelity while running on the same hardware. The forces at play are a demand for further features while retaining system responsiveness. The solution presented is one where the developer abstractly reveals the features’ impact (the runtime cost). The end user is then expected to make informed decisions about which features they want by picking them from this menu of items while not needing to be technically savvy.
Suspicious lack of insight
In addition to the problem of adding some complexity to support the configurability, the pattern suffers from having only a single recorded appearance. It does not appear to refer to any other found examples and has no constraints. If it were a pattern, we should expect some reference to how others solved the problem in their context. As it is, the pattern simply suggests the decisions can be given to someone else to worry about. Rather than solve the problem, make it someone else’s problem.
If you have played computer games on both PC and console, you might recall a difference between those two environments concerning fidelity. Console games do not generally make graphics settings the problem of the player. Only with the advent of modern consoles and 4K HDR TV has the option become relatively acceptable to present to the player, and even then, only if the game has a broad enough demographic to garner a genuine benefit related to player preference.
Adapting fidelity in real-time can drive level-of-detail systems and upscaling stages to handle highly complex scenes while not dropping frames. We can do this per frame, which is much better than asking players to balance their diet. We see this superior pattern play out in productivity software where the software reduces fidelity while elements move, such as when adjusting filters and effects in a photo editing application. As I think about it, I realise there’s an element of this pattern in video streaming and autocomplete for web searches; the quality of the result in both cases is lower, but the latency and availability is considerably better.
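As a concrete illustration of the per-frame approach, here is a minimal sketch (the `FidelityController` name, the budget, and the step sizes are my own illustrative choices, not from any particular engine) of a controller that nudges a resolution scale each frame so that frame time stays within budget:

```python
# Per-frame fidelity adjustment: instead of asking the player to pick
# settings, the renderer adapts a resolution scale to the measured
# frame time. All numbers here are illustrative.

class FidelityController:
    def __init__(self, budget_ms=16.6, min_scale=0.5, max_scale=1.0):
        self.budget_ms = budget_ms
        self.min_scale = min_scale
        self.max_scale = max_scale
        self.scale = max_scale  # start at full fidelity

    def update(self, last_frame_ms):
        # Over budget: drop fidelity quickly. Comfortably under
        # budget: recover slowly.
        if last_frame_ms > self.budget_ms:
            self.scale -= 0.05
        elif last_frame_ms < self.budget_ms * 0.8:
            self.scale += 0.01
        # Keep the scale within its allowed range.
        self.scale = max(self.min_scale, min(self.max_scale, self.scale))
        return self.scale
```

Dropping fidelity aggressively while recovering it slowly is a deliberate asymmetry: a dropped frame is immediately visible, while a few frames at slightly reduced resolution usually are not.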
For anyone working within such a domain, the pattern misses obvious recurring bugs or
problems. This suspicious lack of problems leads to there being no wisdom, but
also no authority. This lack of insight can be one of our metrics for
rejecting a pattern. We come up against this lack of conflict again later with
the
So,
Provably false
Another pattern from the first PLoPD book2 was in ‘A Generative
Development-Process Pattern Language’. Pattern number 42 is
Schedule motivations tend to be self-fulfilling: a wide range of schedules may be perceived as equally applicable for a particular task. Schedules are therefore poor motivators in general. Altruism and egoless teams are quaint, Victorian notions. Companies often embark on make-or-break projects; such projects should be managed differently from others. Disparate individual rewards motivate those who receive them, but they may frustrate their peers.
— James Coplien,
Pattern Languages of Program Design [PLoPD95], p. 234.
I agree with the first point, about schedules not being good motivators, but less so with the others. And these all still feel like context to me. They set the scene, but what’s the force? If we try to turn this into a pattern, it must resolve something unresolved. Perhaps the force is the desire of an organisation to get the best performance out of the individual developers. I can get behind this as a problem worth attempting to solve.
So, it’s still a problem of motivation, but at least we’re stating it clearly. Now, it’s become evident that the organisation has a different goal from the individual. Otherwise, why would the organisation think it needs to motivate the individual?
The second and last points are highly problematic, but there are
references to support the claim, so perhaps I’m wrong? The first is
Given a number of alternative levels of behaviour (ten, fifteen, or twenty units of production per hour, for example), an individual will choose the level of performance which has the greatest motivational force associated with it, as indicated by a combination of the relevant expectancies, outcomes, and values.
— Edward E. Lawler,
Pay and Organisation Development , p. 20.
Quaint Victorian notion
When was the last time you considered performing? Have you ever considered putting in more or less effort? Based on reward or otherwise? Or, like most software developers, do you work as best you can because the real reward is completing the work? The intrinsic reward for being good at what you do, knowing you positively impact your community, and perhaps gaining a little self-improvement? The only externally provided reward I cherish is the opportunity to do things my way.
The pattern appears to be backing up its claim based on evidence gathered from factory and plant workers. Physical labourers might increase performance when you compensate them with additional pay, but I do not believe software developers are labourers.
The second reference is no better, citing:
Organisations offer rewards; individuals offer performance.
— Ralph H. Killman,
Beyond the Quick Fix [Kilmann84], p. 229.
What makes this evidence even less applicable is how it steps on its own toes, later claiming that intrinsic rewards are valuable while extrinsic (employer-given) rewards seem to cause problems.
For example, if employees’ paychecks are out of line with what they believe they deserve, they become very upset. If employees feel that others are getting more pay for doing less work, they become angry.
— Ralph H. Killman,
Beyond the Quick Fix , pp. 233–234.
The work by Daniel Pink on drive in
We can also see that extrinsic rewards do not make the world more wholesome from the observation that bonuses are natural Cobra Effect triggers. Motivating your high-performance individuals is not a job of cash or celebration but one of understanding what drives them and giving them more of it. Herzberg’s two-factor theory suggests money does not motivate; it only demotivates by its absence.
Daniel Pink’s
At this point, I claim the present form of
Upon review, the pattern language fits a specific environment. I recognise it from first-hand experience. An ego-driven developer without appreciation for the complexity of a large, healthy organisation would feel comfortable with these patterns. I can see why this pattern language might have been published. Many of the reviewers likely felt at ease with the prescriptions. Most software developers likely conform to this description. That person could have been me within the first decade of my career.
Diluted
With the patterns now diluted to a collection of mixed quality and credibility, the GoF book started to look like a source of consistency. The development world was in the middle of its object-oriented frenzy, with nearly every mainstream language being object-oriented or providing direct support for the paradigm. The solutions found in the GoF book applied to most developments because they were fundamental and paradigm-oriented rather than problem-centric. But with that came the inversion—the solution in search of a problem.
Developers early in their careers were tempted into believing there was a pattern for every concern. They would ask which design pattern they should use to solve their problem. In a world where all the design problems of programming are known or even knowable, this might have been a good practice for beginner programmers. They should not need to solve unsolved problems so early in their careers. However, we do not live in such a world. Programming itself has not been around long enough to have most of its problems solved. Looking around at the state of software development, we’re learning about more things we haven’t yet solved3 faster than we’re solving the things we already know about.
Only having 23 famous and credible patterns means there won’t likely be a pattern to solve your problem. The question then posed becomes a mind-narrowing activity. The developer stops thinking about the situation and instead looks to see which of the 23 solutions fits it best. Thus, we end up with misused patterns, which deal a blow either to their credibility or to the developer’s capability, neither of which is desirable.
Design patterns, or at least the way they are portrayed, are also toxic, blocking ingenuity in a different way. The Sapir–Whorf hypothesis4 explains how the structure of the language we use and the idioms we live with help define what kinds of nuances we can communicate. As invention is often the combination of multiple ideas in our minds and making new connections, being unable to think about things as different from each other or from a new perspective becomes stifling. If you know a pattern matching a problem and the language overlaps your use case, you’re likely to assume the pattern is suitable and try to find a way to bend it into your problem space to implement a solution. If your language includes these patterns, there’s inertia against thinking about an alternative pattern that solves the problem more elegantly.
So, as an example, let’s talk about the
- It locks us to a single class (which hurts testing by making unit tests harder to isolate and mocks harder to inject).
- It provides a creational behaviour, creating the object just in time, which affects runtime performance because construction is not free, but when it happens, no one knows;
- thus there is no decided order of construction, making initialisation less well defined, and some orderings may hide bugs; and
- they are potentially never destroyed, meaning clean-slate tests can’t work, and we can’t be sure we’ve freed all our resources because we never naturally achieve complete shutdown.
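To make those criticisms concrete, here is a minimal lazy singleton sketch in Python (the `Logger` example is hypothetical): construction happens just in time at first use, so its timing and ordering are an accident of call order, and nothing ever destroys the instance.

```python
class Logger:
    _instance = None  # created lazily, destroyed never

    def __new__(cls):
        if cls._instance is None:
            # Construction happens 'just in time': whichever caller
            # arrives first pays the cost, and the ordering relative
            # to other singletons is an accident of call order.
            cls._instance = super().__new__(cls)
            cls._instance.lines = []
        return cls._instance

    def log(self, msg):
        self.lines.append(msg)
```

Every caller shares the one hidden instance, which is exactly what makes isolated unit tests and injected mocks awkward: state written anywhere leaks everywhere.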
Notice how solutions involving a
Another implementation of
Okay, enough of the
One of my favourites to point out would be
- Save the state after every change to a different location. It’s dumb but works, so it should not be ignored.
- Make your commands fully reversible. That is, they are never destructive. This isn’t always possible (for example, when one command creates an object and later commands refer to it, undoing and then redoing over such sequences gets complicated), but when it is, it’s a choice you could make.
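The first option above can be sketched in a few lines (a hypothetical `SnapshotUndo` wrapper, assuming the state is small enough to deep-copy on every change):

```python
import copy

class SnapshotUndo:
    """Undo by saving a full copy of the state before every change.
    'Dumb but works': no command objects, no reversibility needed."""

    def __init__(self, state):
        self.state = state
        self.history = []

    def change(self, mutate):
        # Snapshot *before* mutating so undo can restore it.
        self.history.append(copy.deepcopy(self.state))
        mutate(self.state)

    def undo(self):
        if self.history:
            self.state = self.history.pop()
```

The cost is memory and copy time, which is why real systems often move to deltas or reversible commands, but as a baseline it is hard to get wrong.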
But here’s the problem: when people ask about a design pattern for undo, most
are told how to use
Consider also that when you only store the user’s actions, you begin to see how you can track commands in a way that they become macros. Vim users are well aware of this way of building up operations. Each action is usually an elementary key press or sequence of key presses building up a much larger expressive language.
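A sketch of that idea, using a hypothetical recorder and a toy editor: each elementary action is applied immediately and, while recording, remembered, so the whole sequence can later be replayed as a macro.

```python
class Editor:
    """A toy editor with elementary actions, for illustration only."""
    def __init__(self):
        self.text = ""

    def insert(self, s):
        self.text += s

    def delete_last(self):
        self.text = self.text[:-1]

class MacroRecorder:
    def __init__(self):
        self.recording = None

    def start(self):
        self.recording = []

    def perform(self, editor, action, *args):
        # Apply the action now; remember it if we are recording.
        getattr(editor, action)(*args)
        if self.recording is not None:
            self.recording.append((action, args))

    def stop(self):
        macro, self.recording = self.recording, None
        return macro

    def replay(self, editor, macro):
        for action, args in macro:
            getattr(editor, action)(*args)
```

Because the macro is just a list of recorded actions, the same machinery that tracks commands for undo can replay them for free, which is the connection the Vim example makes.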
Missing the bigger picture
But worse than this, addressing the problem of an undo pattern by suggesting
Ignoring the user means ignoring which steps are considered steps and which are fleeting actions that do not change the document in a meaningful way. In such a system, macros may expand into actions, and each action would undo in turn, even though the user only took one action.
Consider modality, such as when an undo system keeps track of all commands,
including selection and tab switching. In that system, it might become
impossible to undo changes in one document while leaving modifications in a
second unaffected. Imagine opening your favourite text editor and finding the
undo buffer applied to all open documents, not just the one currently in focus.
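One sketch of the alternative, with illustrative names of my own: give each document its own undo stack, so undoing in the focused document cannot destroy changes in another.

```python
class UndoContexts:
    """One undo stack per document: undo applies to the focused
    document only, rather than one global buffer across all of them."""

    def __init__(self):
        self.stacks = {}  # document id -> stack of undo callables

    def record(self, doc_id, undo_fn):
        self.stacks.setdefault(doc_id, []).append(undo_fn)

    def undo(self, doc_id):
        stack = self.stacks.get(doc_id, [])
        if stack:
            stack.pop()()  # run the most recent undo for this document
```

The interesting design decision is not the stack itself but the keying: choosing what counts as an undo context is exactly the part the famous pattern pairing leaves unsaid.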
So, for an undo pattern, you need the undoing context, and that’s not covered
at all by the pairing of
If you hadn’t noticed, the latter anti-pattern is present in the Windows file system undo mechanism.
- Open two explorer windows.
- In the first, make a folder and name it.
- In the other, open a different folder and create a file there.
- In the first window now, undo.
Notice how it destroys the file rather than revert the folder rename. Not what you expected?
Succession
Now, let’s talk about
It’s as if the pattern of request–response arrived, was assumed natural, and
ignored. It’s a useful pattern and should be understood so that it can be used
outside the scope of web services. It’s a
A very close relative of
Almost true
The poison here is often not how the patterns themselves are harmful but how they almost fit so many problems, or hide better patterns behind a famous almost-solution. Their solutions are applied not only when they don’t fit but also when they mostly fit, yet something else would have been better: more flexible, less complex, and able to open up more and different opportunities.
Another poison comes from how some incorrect patterns have remained prominent
and bring their world view with them. A good story is always true, but only in
the world it brings with it. When we see patterns such as
The Gang of Four wrote that their book offered a way to improve our ability to talk about the larger elements of our software by introducing terms to use as a language. They were partially correct. They introduced a language to communicate common ideas well and efficiently. But as with all things to do with efficiency, there was a negative impact on effectiveness. When we use the language they provide, we walk a narrow path. When we walk the path often enough, we start to believe it is the only valid route.
Yes, it’s spelled incorrectly. But that is how it’s published in the book.
I also found this pattern language in the
As we develop new techniques, we see new problems we need to solve, such as screen-space reflection artefacts, handling audio in VR, developing an element of self-doubt in AI language models, and finding a non-destructive financing model for search engines and social media.
Not one publication, but a series of hypotheses over time, across Franz Boas, Edward Sapir, and Benjamin Lee Whorf, evolving into a principle of linguistic relativity. The way words are constructed from pieces, and the rules of that construction, inhibit or afford ways of perceiving, thinking about, using, and extending both the words and their objects. This subject is far too large for a footnote.
Better Code: Runtime Polymorphism - Sean Parent, from NDC {London} January 2017 https://www.youtube.com/watch?v=QGcVXgEVMJg
I did not spot it when reading