There's a phrase that turns up in almost every digital project conversation, usually when someone wants to close down a debate rather than have it.
"We should follow best practice."
It sounds reasonable. It sounds responsible. It sounds like the kind of thing you'd want your development partner to say. And in a narrow technical sense - secure authentication, proper test coverage, sensible version control - it often is. There are patterns in software development that are simply correct, and you should follow them.
But we think "best practice" can ooze beyond its useful territory. It can become a way of dressing up convention as expertise - a rhetorical move that substitutes a general rule of thumb for specific, contextual thinking. And in our experience, expensive project decisions often come dressed up in "best practice" clothing.
Why best practice is not always “best”
"Best" implies a ranking - that somewhere, someone has evaluated all the options against all the possible contexts and arrived at a definitive answer. In software development, that's almost never true. What gets called best practice is really a set of commonly accepted ways of building systems, and those aren't always the right choice. The best approach depends on context, tradeoffs, and what you're trying to optimise.
What's actually meant, most of the time, is "common practice", which is a very different thing. Common practice means it's what a lot of people do. It means there's documentation. It means the team won't have to think too hard because they've seen it before. That absolutely has value. In fact, we’d argue it’s usually a really good starting place. But it's not the same as "best for your situation."
The more honest framing is that every technical decision is a tradeoff. You're not choosing between a good option and a bad one; you're almost always choosing between options that are good and bad in different ways. The work is evaluating context and probing for what's "most right" in light of what we know about your current needs and future aspirations.
There's a wrinkle worth naming here, too. AI tools have made this problem subtly worse. Research from Thoughtworks on two years of AI-assisted development found that when an AI assistant describes something as "best practice," engineers are measurably more likely to adopt it without interrogating the context - a textbook framing effect. The polished, confident output of a large language model turns out to be remarkably good at discouraging the follow-up questions that great engineers are relentless about asking. So just at the moment when AI is accelerating how fast we ship, it's also quietly reinforcing the habit of reaching for convention rather than thinking from first principles. Something to sit with.
The technical tradeoffs are the same kind of problem at a smaller scale. Microservices give you independent scalability; they also give you a distributed system you now have to manage. A highly normalised database schema protects your data integrity; it also means more complex queries at scale. An abstraction-heavy codebase is easier to change; it can also be slower and harder to reason about in a pinch.
None of those tensions resolve cleanly. The only honest answer to "which is better" is: it depends what you're optimising for.
How context changes technical decisions
The reason "best practice" is so persistent is that it offers something genuinely appealing: a shortcut past the hard thinking. And sometimes, under time pressure or in familiar territory, that shortcut is fine.
But we work in complex environments where the shortcut is often wrong. National data platforms. Compliance-heavy regulatory systems. Applications embedded in critical operational infrastructure. Business processes serving diverse user groups across multiple agencies or regions.
These projects don't share the same answers to even basic questions. An architecture that's sensible for a fast-moving startup product would be reckless in a government-grade data environment. A development pattern that serves a small, experienced team well can become a liability when you're handing something over to an internal team who'll own it for the next decade.
The right question isn't "what does everyone else do?" It's "what does this situation actually call for?" Answering that requires understanding the problem in enough depth that you can evaluate options honestly, including the option to do something unusual if the context warrants it.
A concrete example: we support an aging government application, a repository for a significant quantity of highly niche data that has lacked an agency sponsor for several years. Its useful life is highly uncertain, budget permission and stakeholder decision-making power are very limited, and investment appetite is low. But nobody wants to make an "end of life" call. "Best practice" would be to keep software versions current, run proactive infrastructure improvements, and manage ongoing cost and risk. In this case, the contextual answer looked like a thorough risk assessment and targeted mitigation activities only, plus surfacing clear advice on risks and options. Nothing more. That's not a failure of ambition or a failure to adhere to "best practice"; it's the right call for the particular situation.
That's way harder than reaching for a rule. It's also the actual job.
Examples of software tradeoffs in practice
It shows up in methodology too.
There's a version of digital delivery where every project gets the same sprint length, the same ceremonies, the same documentation requirements. Reliable. Easy to resource. Defensible in a proposal.
Also optimised for the vendor, not the client.
The engagements we find work best are the ones where the process is assembled from principles rather than templates. A short, focused project with an experienced client team needs something different from a multi-year transformation with complex stakeholders and evolving requirements. Apply the same framework to both and you'll get consistent processes and inconsistent outcomes.
We work from a toolkit of approaches we've tested and trust, and we sequence them to fit the reality of the engagement. We're also prepared for them to change mid-project. On longer-running engagements, the level of process overhead we start with is rarely what we end with. If we're working with a large public-sector partner with dedicated resourcing, the expectations and process load look very different from helping a start-up founder get their product production-ready and in market under tight timelines. It's essential that the process has room to change with the nature of the delivery - and the capabilities of the stakeholders we're working with. This flexibility isn't a lack of rigour. It's rigour applied at the right level.
How to choose the right approach
One thing that's helped us hold ourselves accountable to this kind of thinking: we document the reasoning behind significant decisions, not just the decisions themselves.
Questions like "why did we do it this way?" usually surface six months to a year after a project milestone. In our experience, the partner-side team has often changed by then, the context has faded, and the original logic has evaporated. A clearly written decision record - here's what we were weighing, here's what we optimised for, here's what we knowingly accepted as a downside - keeps that reasoning available. It also creates a useful check at the time of the decision: if you can't articulate the tradeoff, you probably haven't thought it through.
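To make that concrete, here's a minimal sketch of what a decision record might look like, using the aging-application scenario above. The field names and wording are our illustration, not a prescribed standard - any format that captures the tradeoff will do:

```
Decision: Limit work to risk assessment and targeted mitigation;
          no proactive upgrades or infrastructure improvements.

Context:  Application has lacked an agency sponsor for several years.
          Useful life is uncertain; investment appetite is low.

What we weighed:       Keeping versions current vs. spending limited
                       budget on work that may never pay off.
What we optimised for: Containing risk at minimal ongoing cost.
What we knowingly
accepted as downsides: Growing version drift; a larger upgrade bill
                       if the application's life is later extended.

Revisit when: An agency sponsor is appointed, or a material
              security risk is identified.
```

The value isn't the template; it's that the tradeoff is written down where the next team can find it.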
For our partners, this transparency matters in a different way. It means they can see our working. They know the choices being made on their behalf aren't reflexive; they're considered. And if circumstances change and a decision needs revisiting, there's a clear basis for that conversation.
What we’d like to say instead
We’re trying to shift our language and thinking internally to “leading practice”.
The shift is subtle but meaningful. "Leading" acknowledges that the right approach is always in motion, that judgment is required to find it, and that someone needs to be doing the thinking rather than deferring to convention. It makes space for a team to say "the common approach doesn't fit here, here's what we're doing instead, and here's why."
That kind of reasoning is what we expect from our senior people. And it's what we'd encourage our partners to expect from any development partner they choose to work with.
What to look for in a development partner
If you're evaluating development teams, this is worth probing in conversation.
Ask them to explain a decision they made that went against the grain, a place where they chose a less common path because the context demanded it. Ask how they document significant architectural decisions and who can see that reasoning. Ask what happens on their projects when circumstances change mid-build and the original approach needs rethinking.
If the answers are clear, specific, and show genuine engagement with the tradeoffs involved, you're talking to people who are actually thinking. If the answer is a variation of "we follow industry best practice," that's a signal worth sitting with.
The best work we've done has rarely come from applying a standard playbook. It comes from understanding our partner's world in enough depth that we can make decisions that fit their situation - and from being willing to say "this is unusual, and it's right" when it is. Best practices can, and should, be challenged when the context calls for it.
In software development, best practice is a starting point, not a rule. The right decision comes from understanding context, tradeoffs, and long-term impact.
Build the Right Thing. Not the obvious thing. Not the comfortable thing. The one that's right for here, for now, with what we actually know.