Imagine building a bridge to cross a puddle. Engineers would laugh, but in software we do this daily. Teams often rush to solve problems using code before questioning whether the problem needs solving or if it’s even the right problem.

In critical systems, like banking, healthcare, or infrastructure, unnecessary code isn’t just clutter. It’s a risk. Every new line is a potential bug, a maintenance burden, and added complexity. Sometimes it even creates new problems that distract further from the original goal.
The best solutions often start with questioning the problem itself. Sometimes, the right answer is to write no code at all.
The Cost of Unnecessary Code
Prototyping and writing code is fun and easy; maintaining it can become a difficult chore. Let’s go through some of the problems that introducing new application logic can cause when developers are too eager to jump to implementation:
Complexity Growth
Every new line of code increases the complexity of the system. Usually the change isn’t dramatic, but complexity stacks up: one line in a new project does one clear thing, while the same line in an existing code base may interact with many context variables, depending on the quality of the code. How easy will it be for a newcomer to the project to understand all these interactions? How many context variables must be kept in one’s head to avoid introducing a bug? Essentially, every new piece of logic is something we will have to reason about tomorrow, on top of the complexity that already exists.
There are good practices for keeping both essential (domain) and accidental (engineering) complexity in check, such as DDD, hexagonal architecture, and simple abstractions. But they still require knowledge and effort: to keep the context simple when making changes, and to revisit and simplify further as the domain matures or once pressing deadlines have passed.
Opportunity Cost
Adding new logic is not just the fun of writing code, which can take anywhere from five minutes to N days; it also means impact analysis, testing, deployment, potential iterations, and communication. While we are busy with all of that, we are not doing something else.
Risk Amplification
New code changes mean new risks. It’s obvious that logic changes might introduce bugs: the new logic might not work as the developer intended, or existing flows can break. Other things can happen as well: a compliance gap is introduced, a performance bottleneck is created, or a new attack vector emerges. Sometimes the change brings more harm than good.

How Exactly Do We Overcomplicate
Software engineers are problem-solvers by nature, but that can be a double-edged sword, especially when we don’t take the time to reflect on what we do.
More often than we’d like, engineers solve symptoms rather than root causes. It happens especially often when teams or individuals are handed a task and jump straight to a solution.
An example: a system that sent emails to users when certain changes were made to their accounts. A problem surfaced when developers realised that not only users can make these changes, internal systems can too. As a result, users received confusing emails saying “…if you didn’t make this change then contact us…” when they certainly hadn’t, and that was expected. “Fine, let’s filter out changes requested by users from changes made by systems,” the team said. Sounds logical, but it requires changes in several places across a number of components: first for better agent tracking, then for notifications.
But what is the actual problem that had to be solved: do we want to send nicer emails, or do we want to prevent unauthorised access? What if, instead of working on the emails, which are sent when it’s already too late, we used the already available APIs to require additional 2FA in the account-changes flow, making sure it’s the actual owner who makes the change? That solved the real problem and turned out to be much easier to do.
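As a rough illustration (not the actual system), a hedged sketch of that guard might look like the following; the function, the two_factor_client, and the change_request fields are hypothetical names invented for this example:

```python
# Hypothetical sketch: require a second factor before applying user-initiated
# account changes, instead of filtering notification emails after the fact.

def apply_account_change(change_request, two_factor_client, account_repo):
    """Apply an account change, demanding 2FA only for user-initiated requests."""
    if change_request.initiated_by == "user":
        # Ask the already available 2FA API to confirm it is really the owner.
        if not two_factor_client.verify(change_request.account_id,
                                        change_request.otp_code):
            raise PermissionError("Second factor check failed")

    # System-initiated changes (migrations, support tooling) skip the prompt,
    # so the notification emails are no longer misleading for them.
    account_repo.save(change_request.account_id, change_request.payload)
```

The notification logic stays untouched: once only verified owners can make user-initiated changes, the existing emails stop being confusing.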

It’s not only the problem that might be misidentified; often it’s the way we approach it. The first solution that engineers come up with tends to become their favourite, and then they start building. We fall in love with the plan, especially when it’s not an obvious one. “It’s going to solve the problem in such a smart way!” we think.
Let’s say we need to decide when to switch between synchronous and asynchronous processing for updates to a certain object stored in a DB. When updates arrive too often, synchronous processing leads to the high contention we are trying to avoid.
So the first idea that comes to mind is to switch to asynchronous processing when the rate of updates is too high. Let’s track the rate then! We’ll set up a rate tracker. Then we’ll tune it. But updates might land on different instances of our service, and we need to account for scaling too… So it has to be a distributed counter, or the load balancer has to route based on custom ID-specific logic.
What if we just step back before implementing anything? A rate tracker might be a good idea, but it requires configuring multiple parts and writing new code, so it intrinsically carries a higher risk of bugs, takes more time, and so on. What if we start simple instead? We already fetch the object from the DB anyway, so we can check its last updated_date timestamp and enable async processing if it was updated too recently when the next update arrives. It’s a naive approach, but it can be implemented with one line of code. It turned out that the naive approach was enough, and it prevented the complicated, time-consuming changes the team wanted to jump to from the beginning.
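A minimal sketch of that check, with an illustrative threshold (the handler, queue, and field names below are assumptions, not the team’s actual code):

```python
from datetime import datetime, timedelta, timezone

# Illustrative threshold: treat anything updated within the last second as "hot".
ASYNC_THRESHOLD = timedelta(seconds=1)  # assumed value, tune for the real workload

def handle_update(record, update, queue, db):
    """Process an update synchronously unless the record was touched very recently."""
    # Assumes record.updated_date is a timezone-aware datetime fetched with the object.
    recently_updated = datetime.now(timezone.utc) - record.updated_date < ASYNC_THRESHOLD
    if recently_updated:
        # Contention is likely: defer this update to asynchronous processing.
        queue.enqueue(update)
    else:
        # Low update rate: the simple synchronous path is fine.
        db.apply(update)
```

The decision itself really is a single comparison against a timestamp we were already fetching; everything else in the sketch is the surrounding handler.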

Lastly, as software engineers we don’t always prioritise well. This is an area where there is always room to learn and get a bit better. Even with a known set of problems, we sometimes spend time on the ones that are not the most important to focus on at the moment.
This one happened to me: I was designing sharding support for a set of existing services. It was important to make sure the sharding approach was supported at the platform level, so it could easily be replicated in other services when needed. That required considering a lot of different variables, and there were multiple potential ways to solve the problem. It was important to settle on the sharding strategy, the choice of shard keys, the level at which the implementation should reside, and so on: what Jeff Bezos calls one-way door decisions [Amazon’s Day 1 culture: make high-quality, high-velocity decisions]; once settled, they are very hard to walk back. After solving some of these problems, I also spent a few days specifying the exact algorithm for moving partitions between shards, even after confirming that it was clearly possible and that doing it was just a matter of implementation.
The problem was that we didn’t need to move any partitions between shards any time soon; we needed a way to start writing to multiple PostgreSQL shards with our custom sharding strategy. But because the move would be needed at some point, and it was within the scope of the sharding work, I lost sight of the fact that the time would have been spent far more efficiently on what was a priority in the foreseeable future: other one-way door decisions, or even building a POC to prove that everything researched so far actually worked together.
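The article doesn’t describe the custom strategy itself, so purely as a hypothetical sketch, routing writes to multiple PostgreSQL shards can start as simply as picking a shard by hashing the shard key (the DSNs and function name below are invented for illustration):

```python
import hashlib

# Hypothetical shard map; the real project used its own custom strategy.
SHARD_DSNS = [
    "postgresql://shard0.internal/app",
    "postgresql://shard1.internal/app",
    "postgresql://shard2.internal/app",
]

def shard_for(shard_key: str) -> str:
    """Pick the PostgreSQL shard responsible for a given shard key."""
    digest = hashlib.sha256(shard_key.encode("utf-8")).digest()
    index = int.from_bytes(digest[:8], "big") % len(SHARD_DSNS)
    return SHARD_DSNS[index]

# Usage: resolve the shard before opening a connection and writing.
# dsn = shard_for(account_id)
```

A POC of roughly this kind, wired to real shards, would have validated the core ideas far sooner than a detailed partition-move algorithm.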

When to Write Code (and When Not To)
We still need to code. New products and important changes won’t implement themselves. So how do we tell when it’s time to write code and when it’s not?
Keep asking “Why?” until the problem cracks
When tasked with solving a problem, dig deeper and try to understand its whole scope. Take the example of the security emails sent for internal changes:
- “Why do we need to send these emails?” → To react to unauthorised changes.
- “Why do unauthorised changes happen?” → Weak security checks.
- Then let’s solve the root cause and improve the security checks rather than just implement the surface ask.
Prioritise the right problems: think about what happens if we don’t solve this problem now
Not all problems are equally important. Some are a priority for clear reasons; others can wait, or their resolution costs more than the benefit it brings.
During the sharding project, I prioritised data migration (a future problem for once other concerns were resolved) over validating the core ideas (a potential blocker for the whole project). Just thinking twice about what the priority was and which milestones to focus on would have helped deliver the project quicker.
Know your environment: can the problem be solved using the existing tools instead of building a new perfect instrument?
This point is especially important for platforms and core systems. Building something ad hoc sometimes not only opens loopholes, it also wastes time.
In the async processing example, the team nearly built a custom rate counter instead of relying on the already available updated_date timestamp. The result? The simple solution works perfectly well with almost no code changes.

Less Code (that we don’t need), Better Systems
Someone might read all this and say, “Well, if we don’t react quickly, we become slow and eventually lose to other companies on the market.” And I would even agree, if such quick actions were always aimed in the right direction, were well justified, and didn’t sometimes bring more problems than benefits.
So, to be quick and efficient, let’s make sure that when approaching a problem, before jumping into coding, we fully understand why it should be solved, why solving it is more important than working on something else, and whether our way of solving it is the best one.
Because sometimes less is more.