January 28th, 2026
More Computing Power Is No Substitute for System Architecture
By Uwe Jesgarz, Managing Director, Kithara Software
When technical systems reach their limits, the typical reflex is the same: move to faster hardware. More cores, higher clock speeds, more powerful accelerators. The decision appears rational, measurable, and comparatively easy to implement.
In many cases, however, it is still the wrong one.
Not because computing power is unimportant, but because it is often used as a substitute for something that cannot be retrofitted: a clear, systematically designed structure.
Performance Is a Resource – Architecture Is a Decision
Computing power is a resource, comparable to the amount of RAM or storage capacity. System architecture, by contrast, is a fundamental decision that determines how a system functions. Resources can be scaled; architecture cannot – at least not retroactively and not without consequences. This is where a widespread misconception arises: the assumption that additional performance can compensate for structural deficiencies. It may work in the short term, but over time it tends to aggravate the problem. Computing power does not resolve a lack of structure – it merely conceals it.
Uncontrolled Concurrency Remains Uncontrolled
Modern systems are almost always concurrent. Multiple tasks run simultaneously, access shared state, and respond to events whose timing is not deterministic. This concurrency is not created by hardware, but by design decisions. Where these decisions are not made consciously but emerge inadvertently, control is lost.
If there is no explicit temporal structure governing all processes, system behavior remains implicit – that is, vague and undefined. Dependencies are not clearly specified, states are not unambiguously defined, transitions are not controlled. The conclusion is straightforward: uncontrolled concurrency remains uncontrolled – regardless of how fast the hardware may be.
Additional performance merely enables the system to move through this disorder more quickly.
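What this looks like in practice is easiest to show with a deliberately unstructured sketch. The example below is hypothetical and not taken from any specific project; the names `planner`, `controller`, `g_target`, and `g_velocity` are invented. Two threads exchange a two-field setpoint through individually atomic variables, so every single access is technically correct, yet no rule governs the hand-over of the pair:

```cpp
// Hypothetical sketch of uncontrolled concurrency: each access is atomic,
// but nothing defines when the pair (target, velocity) may be observed.
#include <atomic>
#include <cstdint>
#include <iostream>
#include <thread>

std::atomic<std::uint32_t> g_target{0};    // commanded position
std::atomic<std::uint32_t> g_velocity{0};  // intended invariant: velocity == 2 * target

void planner() {                           // updates the setpoint in two steps
    for (std::uint32_t i = 1; i <= 200000; ++i) {
        g_target.store(i);
        g_velocity.store(2 * i);           // window between the two stores
    }
}

void controller(std::uint64_t& torn) {     // reads the setpoint as a pair
    for (int i = 0; i < 200000; ++i) {
        std::uint32_t v = g_velocity.load();
        std::uint32_t t = g_target.load();
        if (v != 2 * t)                    // values taken from two different updates
            ++torn;
    }
}

int main() {
    std::uint64_t torn = 0;
    std::thread writer(planner);
    std::thread reader([&torn] { controller(torn); });
    writer.join();
    reader.join();
    std::cout << "inconsistent setpoints observed: " << torn << '\n';
}
```

A faster processor narrows the window between the two stores, so the inconsistent snapshots become rarer; it never closes the window. Only a structural decision closes it: a single owner of the pair, a defined hand-over, or a lock around both fields.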
Why This Misconception Is Particularly Common Today
The misconception is reinforced by modern platforms. Increasingly powerful multicore processors, cloud infrastructures, and scalable runtime environments create the impression that performance is virtually unlimited. When a system reaches its limits, it is scaled – vertically or horizontally. This is convenient – and it postpones addressing the underlying issue.
The more readily available computing power becomes, the less often anyone asks why the system needs so much of it in the first place. Structural questions are deferred because the symptoms initially disappear. The problem, however, is not solved; it has merely become less visible.
When Errors Become Rarer – and More Expensive
A system lacking structural order does not reveal its weaknesses immediately. It may appear stable for a long time, especially under laboratory conditions or moderate load. The problems tend to surface later:
- under specific load conditions
- in rare combinations of states
- in the field rather than in testing
At that point, they are:
- difficult to reproduce
- difficult to explain
- expensive to analyze
The system appears reliable … until it no longer is.
Speed Increases Throughput, Not Control
A common misunderstanding is to equate speed with control. Computing power and speed increase throughput. They do not increase predictability.
A system does not become more deterministic simply because it has more performance; it merely behaves unpredictably at a higher rate. Temporal errors do not disappear; they become statistically less frequent and therefore more dangerous. When a problem eventually does occur, troubleshooting can easily consume weeks or even months of engineering effort. A fast system without temporal order is therefore not a high-performance system but a risky one.
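The difference between throughput and predictability can be made measurable. The sketch below is hypothetical: it runs a nominally 1 ms cycle as an ordinary OS thread and records how late each cycle actually starts, both as an average and as a worst case.

```cpp
// Hypothetical measurement sketch: average lateness of a 1 ms cycle describes
// throughput; worst-case lateness describes predictability. The worst case is
// governed by what else the system allows to run, not by clock speed.
#include <chrono>
#include <iostream>
#include <thread>

int main() {
    using clock = std::chrono::steady_clock;
    constexpr auto period = std::chrono::milliseconds(1);
    constexpr int cycles = 5000;

    auto next = clock::now() + period;
    std::chrono::nanoseconds worst{0}, sum{0};

    for (int i = 0; i < cycles; ++i) {
        std::this_thread::sleep_until(next);     // intended release time of the cycle
        auto late = std::chrono::duration_cast<std::chrono::nanoseconds>(clock::now() - next);
        if (late > worst) worst = late;
        sum += late;
        next += period;
    }

    std::cout << "average lateness:    " << (sum / cycles).count() << " ns\n"
              << "worst-case lateness: " << worst.count() << " ns\n";
}
```

On a faster machine the average figure typically improves; the worst-case figure is dominated by scheduling, interrupts, and whatever else the platform lets preempt the loop. That upper bound is a structural property of the system: it has to be designed and enforced, it cannot simply be bought.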
The Question Rarely Asked
In many projects, there is extensive discussion about how much computing power is required. Far less often does anyone ask which temporal guarantees the system must fulfill. Yet this is precisely the decisive issue:
- What is allowed to happen, and when?
- Which states are valid – and for how long?
- Which events take precedence?
- Which subsystems constitute the overall system?
- How do they interact with one another?
These questions cannot be answered with faster hardware or compiler optimizations. The architecture of a project begins where firm decisions are made.
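What explicit answers to these questions can look like is sketched below. The example is hypothetical; all names (`State`, `Event`, `budget`, `next`) are invented for illustration. The valid states are enumerated, event precedence is written down as data, every permitted transition is a deliberate decision, and a state carries a stated time budget:

```cpp
// Hypothetical sketch: states, event precedence, transitions, and time budgets
// are written down explicitly instead of left to whatever happens to happen.
#include <chrono>
#include <cstdint>
#include <optional>

enum class State { Idle, Homing, Running, Stopping, Fault };   // the only valid states

// Lower value = higher precedence; a dispatcher that drains pending events
// in this order makes "which events take precedence" an explicit rule.
enum class Event : std::uint8_t {
    EmergencyStop = 0,
    LimitSwitch   = 1,
    StartCommand  = 2,
    StopCommand   = 3,
};

// How long the system may remain in a state before a supervisor must react:
// a stated temporal guarantee instead of an implicit assumption.
constexpr std::chrono::milliseconds budget(State s) {
    switch (s) {
        case State::Homing:   return std::chrono::milliseconds(5000);
        case State::Stopping: return std::chrono::milliseconds(200);
        default:              return std::chrono::hours(24);   // effectively unbounded
    }
}

// Every permitted transition is listed; any combination not listed is refused.
std::optional<State> next(State s, Event e) {
    if (e == Event::EmergencyStop) return State::Fault;        // always wins
    switch (s) {
        case State::Idle:
            if (e == Event::StartCommand) return State::Homing;
            break;
        case State::Homing:
            if (e == Event::LimitSwitch)  return State::Running;
            if (e == Event::StopCommand)  return State::Stopping;
            break;
        case State::Running:
            if (e == Event::StopCommand)  return State::Stopping;
            break;
        case State::Stopping:
        case State::Fault:
            break;                                             // only explicit reset paths lead out
    }
    return std::nullopt;                                       // undefined combination: reject, do not guess
}

int main() {
    State s = State::Idle;
    if (auto n = next(s, Event::StartCommand)) s = *n;         // Idle -> Homing is a defined step
    static_assert(budget(State::Stopping).count() == 200);     // the guarantee itself is checkable
    return s == State::Homing ? 0 : 1;
}
```

None of this requires a faster processor. It requires decisions, and their value is that a supervisor can now verify them: whether the current state has exceeded its budget, and whether an undefined transition was ever attempted.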
A Different Measure of Progress
Technical progress is often equated with sheer increase: faster, larger, more parallel. For complex systems, that is an insufficient measure. Truly advanced systems should instead:
- remain explainable
- respond reproducibly
- behave predictably under load
Those who attempt to overpower problems with computing power choose to forgo understanding. And without understanding, there is no control.
