Docker: A Software Engineer's Solution to a Problem That Didn't Need One
Posted: 3/3/2026 5:45:50 PM
By: PrintableKanjiEmblem
Times Read: 116
Likes: 0 Dislikes: 0
Topic: News: Technical

By a Systems & Software Engineer with Decades in Both Worlds — And a Lot of Patience That's Finally Run Out

There's a conversation that keeps happening in engineering teams across the industry, and it usually goes something like this: a developer insists everything needs to be containerized, a systems engineer quietly questions why, and Docker gets adopted anyway — often making everything more complicated than it was before.

I've worked in both worlds — software and systems — for decades. And the more I see Docker used in the wild, the more I'm convinced that in the majority of cases, it's the wrong tool applied to a problem that either didn't exist or was already solved by people who actually understood the underlying infrastructure. It's been adopted by half-assers who didn't understand the infrastructure they were deploying to, celebrated by an industry that confused novelty with progress, and now defended with the religious fervor of someone who has never had to clean up the mess it creates at 2am on a Sunday.

I'm done being polite about it.

The Problem Docker "Solved" Was Largely Self-Inflicted

Docker's origin story is seductive: "it works on my machine" — the eternal cry of the developer whose code behaves differently in production than in development. Docker promised to eliminate that gap by packaging an application and its dependencies into a portable container.

But here's what that story glosses over: the reason "it works on my machine" happened in the first place is that developers didn't want to learn how the target system worked. Classic half-asser thinking: don't solve the problem, just wrap it in something that makes it look like you did. I can't count how many times a half-ass developer told me — back when I was a systems engineer — "It's not my code, it's a problem with your system. You need to fix it." Then I'd go through the system, and as someone who also had software experience, dig through their code, find the problem, and pin it right back on the developer. Boy, those half-ass guys really hate when that happens.

That experience is a big part of why I devoted the last 22 years to pure software development — I can do it better than some lame-ass software-only guy because I understand both the software and the systems. And I've been good at it. I've risen over the years from a pretty bad junior software engineer to a Senior Principal Software Engineer at a giant, well-known company. The lesson? Learn both sides. There are no shortcuts worth taking.

Experienced systems engineers have been deploying software reliably for decades — long before Docker existed. They did it with proper service management tools: systemd, init scripts, Windows Services, and others. They did it with disciplined package management, well-defined dependency handling, and a genuine understanding of the operating system they were deploying to. They knew how to isolate services, manage resources, and harden environments. The knowledge was there. The tooling was mature.

Docker didn't emerge because systems engineering had failed. It emerged because a growing class of developers didn't want to engage with systems engineering at all.

What We Already Had — On Both Platforms — And Why It Was Excellent

Before the container evangelists rewrote history, both Linux and Windows had robust, proven toolkits for deploying and managing software. The tragedy isn't that one platform was better than the other — it's that developers abandoned mature solutions on both platforms in favor of a fashionable abstraction layer.

Linux: A Service Management Ecosystem That Works

On Linux, systemd is a sophisticated, battle-hardened service manager that handles process lifecycle, dependency ordering, resource limits via cgroups, structured logging via journald, socket activation, watchdog timers, and automatic restart policies. It covers an enormous amount of what Docker Compose tries to do — with none of the abstraction overhead and with full visibility into what's actually running on your system.
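
To make that concrete, here's a minimal sketch of a unit file — the service name, binary path, and limit values are all hypothetical — showing how much of the Docker Compose feature list systemd covers natively:

```shell
# Hypothetical unit file for a service called "myapp". Written to /tmp so
# the sketch is self-contained; on a real host it lives in /etc/systemd/system/.
cat > /tmp/myapp.service <<'EOF'
[Unit]
Description=myapp API service
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/opt/myapp/bin/myapp --config /etc/myapp/config.toml
User=myapp
Restart=on-failure
RestartSec=5
# Resource limits via cgroups -- no container runtime in the middle
MemoryMax=512M
CPUQuota=50%
# Hardening knobs systemd provides natively
ProtectSystem=strict
NoNewPrivileges=yes

[Install]
WantedBy=multi-user.target
EOF

# On a real host: systemctl daemon-reload && systemctl enable --now myapp
```

Restart policy, resource limits, hardening, dependency ordering — one readable file, fully visible to standard tooling.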

cgroups and namespaces — the very kernel primitives that Docker relies on under the hood — are available directly on Linux. If you need process isolation, you can configure it at the OS level without a container runtime in the middle. systemd-nspawn gives you lightweight containers without the Docker daemon. LXC gives you full Linux containers with a sane management layer. These are native Linux solutions that systems engineers understood and could reason about clearly.

Package management on Linux — apt, yum, dnf, pacman — gave you reproducible, version-pinned software installation with dependency resolution. Combined with configuration management tools like Ansible, Puppet, Chef, or Salt, you had repeatable, auditable, idempotent environment configuration across entire fleets of machines. No image layers. No container registries. Just clean, version-controlled infrastructure code.
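
As a small illustration — the package name and version are made up — this is all it takes to pin a dependency with apt, no registry or image layer in sight:

```shell
# Hypothetical apt pin for a package "myapp-runtime". Written to /tmp to
# keep the sketch runnable; on a real host it goes in /etc/apt/preferences.d/.
cat > /tmp/myapp-runtime.pref <<'EOF'
Package: myapp-runtime
Pin: version 2.4.*
Pin-Priority: 1001
EOF

# Reproducible install on a real host:
#   apt-get update && apt-get install -y myapp-runtime
# apt resolves the dependency graph and honors the pin. Check this file
# into your configuration management repo and it's auditable forever.
```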

KVM and libvirt gave Linux rock-solid VM management for workloads that needed genuine isolation — full hardware-level separation, snapshotting, live migration, and decades of mature tooling. The Linux virtualization story was already excellent before anyone said the word "container."

Windows: The Platform Everyone Forgot Was Brilliant

On Windows — and this is the part that gets criminally ignored in these conversations — we had an entire, mature ecosystem that worked extraordinarily well and still does.

Windows Services run in the background, start automatically on boot, restart on failure, integrate with the Windows Event Log, run under precisely scoped service accounts, and can be managed remotely via the Services MMC snap-in or PowerShell. Configurable failure recovery actions. Native Windows authentication integration. All of this has worked reliably since Windows NT 4.0. Writing your software as a proper Windows Service rather than a user-space process is a far better solution than Docker's approach of "if it runs on my machine, it will run on any machine, without me having to figure out where I went wrong."

IIS — Internet Information Services — is one of the most capable, battle-hardened web and application servers ever built. Application Pools give you process isolation, automatic recycling, resource limits, and separate identity contexts for each hosted application. You can run dozens of isolated web applications on a single IIS instance with proper resource governance, mutual isolation, and centralized management. This is not a workaround. This is a first-class enterprise-grade feature that Microsoft spent decades refining.

Windows Server Failover Clustering gave you high availability with proper state management, shared storage, and automatic failover — without wrapping everything in a container and praying the orchestration layer handles it correctly.

COM+, MSMQ, WCF gave distributed Windows systems transaction management, message queuing, and service communication deeply integrated with the OS security model — with tooling that administrators actually understood.

And yet here we are, watching developers shove .NET applications into Linux containers running on Windows hosts via WSL2, with a Docker daemon managing network bridges through NAT, all because someone decided native Windows tooling wasn't "cloud native" enough. It makes my eye twitch.

The Common Thread

Both platforms — Linux and Windows — had the right answers already. systemd on Linux and Windows Services on Windows both give you proper service lifecycle management. KVM on Linux and Hyper-V on Windows both give you mature, well-understood VM isolation. Ansible on Linux and PowerShell DSC on Windows both give you repeatable, auditable configuration management.

None of this required Docker. All of it required engineering discipline — which, it turns out, is harder to sell than a docker run command.

The Real Costs Nobody Talks About

Docker's marketing emphasizes simplicity. The operational reality is often the opposite. Just because it's quick certainly does not mean it's the best solution. Do you just want to get it "out the door," or do you want to do it the right way? Always do it the right way.

Complexity That Compounds

Start with a simple containerized application. Now add networking between containers. Now add persistent storage. Now add service discovery. Now add orchestration because you have more than a handful of containers. Suddenly you're running Kubernetes — a system so complex it has spawned an entire industry of consultants, certifications, and managed services just to make it approachable.

You have replaced a service running under systemd or Windows Services with a distributed system management platform that requires specialists to operate. The overhead — cognitive, operational, financial — frequently dwarfs anything you saved by not learning how to write a proper deployment configuration. Hint hint, half-assers.

Security: A Shared Kernel Is Not Real Isolation

This point deserves more attention than it usually gets. Containers share the host kernel. They are isolated using Linux namespaces and cgroups — mechanisms that are powerful but not equivalent to the hardware-level separation a hypervisor provides.

Container escapes are a real, documented class of vulnerability. The Docker daemon traditionally runs as root, making a compromised container a potential vector for full host compromise. Yes, rootless Docker exists. Yes, seccomp profiles and AppArmor policies can harden containers. But the fact that you need those additional layers just to approach the baseline security of a properly configured VM should give you pause. If you're shipping containers without even bothering with them, you might just be a half-assed developer.
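
If you do end up running a container anyway, the hardening in question looks something like this — the image name and UID are hypothetical, but the flags are standard docker CLI options. The sketch writes the command to a script rather than executing it:

```shell
# Minimal blast-radius reduction for a container, sketched as a script.
# "myapp:1.0" and UID 1000 are illustrative.
# --read-only          immutable root filesystem
# --cap-drop ALL       drop every Linux capability up front
# --security-opt ...   block setuid privilege escalation
# --user               never run as root inside the container
cat > /tmp/run-hardened.sh <<'EOF'
#!/bin/sh
docker run \
  --read-only \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  --user 1000:1000 \
  --memory 512m \
  --cpus 0.5 \
  myapp:1.0
EOF
chmod +x /tmp/run-hardened.sh
```

Notice that every one of those flags is re-implementing something a scoped service account, cgroup limit, or filesystem permission already did natively.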

A seasoned systems engineer would never accept "we share a kernel but use namespace isolation" as equivalent to "we have separate VMs."

Compare that to a Windows Service running under a domain service account with precisely scoped Active Directory permissions, TLS certificates managed by your enterprise PKI, traffic governed by Windows Firewall rules, monitored by your existing SIEM integration, audited through the Windows Security Event Log, and patched through your existing WSUS or SCCM pipeline. One of these has a mature, auditable, enterprise-integrated security model. The other requires you to explain container runtime security policies to your auditors. If you're in a regulated industry — finance, healthcare, government — and you're choosing the container stack over the native Windows service model because it's "simpler," I genuinely worry about your compliance posture.

The Image Layer Problem

Docker images are built in layers, and over time those layers accumulate everything: base OS packages, application runtimes, dependencies, configuration files, sometimes secrets that were added and then deleted — but not really deleted, just hidden in a lower layer. Understanding what is actually inside a production container image, who built it, what vulnerabilities it contains, and whether the base image is still maintained is a genuine operational challenge. The auditability that systems engineers took for granted with traditional deployments becomes a significant effort in a containerized environment.
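
Here's the antipattern, sketched as a hypothetical Dockerfile — the file names are made up, but the layering behavior is exactly how image builds work:

```shell
# Each Dockerfile instruction creates a layer. The 'rm' in a LATER layer
# only hides the secret -- the earlier COPY layer still ships with the image.
cat > /tmp/Dockerfile.bad <<'EOF'
FROM debian:12
# layer: secret baked in
COPY deploy_key /root/.ssh/id_rsa
# layer: secret used during the build
RUN ./fetch-private-assets.sh
# layer: secret "deleted" -- but the COPY layer above still contains it
RUN rm /root/.ssh/id_rsa
EOF

# Anyone who can pull the image can recover the key:
#   docker save myimage | tar -x    # unpack the layers, read the COPY layer
```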

Stateful Workloads: An Uncomfortable Truth

Docker was designed for stateless, ephemeral workloads. Databases, message queues, file stores — anything with meaningful persistent state — are fundamentally awkward in containers. Volume management, data persistence across container restarts, backup strategies, and the performance characteristics of containerized storage all require careful thought and often compromise. And that's exactly what the systems guys end up dealing with when they're managing your poorly thought-out code.

Despite this, teams routinely containerize databases and then spend considerable time managing the consequences. A database running on a well-configured bare-metal or VM environment with direct access to storage and network is almost always simpler, faster, and easier to maintain than its containerized equivalent. Database tuning is the work you do up front to avoid ending up with a badly performing app — and that's before we get to the people who build databases with no relations and no indexes. The thing runs slow, and you half-assers show your ignorance to everyone.

Docker on Windows AND Linux: The Costs Are Real on Both Platforms

On Windows: A Compatibility Nightmare

Here's something the Docker advocates conveniently gloss over: Docker on Windows is painful, and the contortions required to make it work should be a red flag about whether it belongs there at all.

When you run Linux containers on Windows — which is what most people do, because the Linux container ecosystem is vastly larger — you are running a full Linux virtual machine (via WSL2 or Hyper-V) inside your Windows host, managed by the Docker daemon, with a networking stack that goes through multiple layers of translation before reaching your actual application. You have introduced an entire Linux subsystem into a Windows environment to avoid learning Windows Server. Stop and think about that for a moment.

Windows Containers — Docker's native Windows offering — exist, but they come with their own spectacular problems. Windows container images are enormous: multi-gigabyte base images compared to a few hundred megabytes for Alpine Linux. Windows containers have strict version compatibility requirements between the container image and the host OS. The Windows container ecosystem is dramatically smaller than Linux, so half the tooling you want simply doesn't exist or doesn't work properly.

So you've taken a mature, capable Windows Server environment and traded it for a fragile compatibility layer. And for this, you gave up native Windows authentication, seamless Active Directory integration, IIS Application Pools, the Windows Event Log, native PowerShell management, and an administrative toolset that your operations team actually knows how to use. Brilliant trade.

On Linux: You're Wrapping Solutions That Already Exist

Linux doesn't get a free pass either. The Linux container story is cleaner than Windows, but that doesn't make it the right answer. When you containerize a Linux service with Docker, you are wrapping systemd with something that is strictly worse at service management than systemd. You are adding image layers on top of a package manager that was already handling dependency management. You are introducing a container networking layer on top of a Linux networking stack that was already fully capable and that your operations team already understood.

The Linux ecosystem's native tools — systemd, KVM, Ansible, apt/dnf, cgroups configured directly — are mature, transparent, and powerful. A properly configured Linux service under systemd with cgroup resource limits is easier to monitor, debug, and secure than the same application wrapped in a Docker container. You can journalctl it, systemctl status it, strace it, and reason about it with standard Linux tooling. A containerized service adds layers of indirection that make every one of those operations more complicated than it needs to be.

The Linux container case is strongest for specific use cases — CI/CD, ephemeral tooling, genuine polyglot dependency conflicts. But even on Linux, defaulting to containers for everything is a discipline failure, not an engineering decision.

VMs: The Answer We Abandoned on Both Platforms for No Good Reason

The dismissal of virtual machines as "too heavy" is one of the laziest arguments in modern infrastructure discourse, and I am exhausted by it — on both Linux and Windows.

On Linux, KVM with libvirt gives you hardware-level VM isolation, live migration, snapshotting, and a management ecosystem that integrates cleanly with every monitoring and automation tool you already use. It is mature, well-understood, and battle-tested at enormous scale.

On Windows, Hyper-V is a Type 1 hypervisor built directly into Windows Server. A Windows Server VM is a full domain member — it participates in Active Directory, gets Group Policy, integrates with your existing monitoring, patching, and management infrastructure. It behaves like a Windows server because it is one. Spinning up a new VM takes minutes. Snapshotting before a risky deployment takes seconds.

Both platforms give you genuine kernel-level isolation. A compromised VM does not have a path to the hypervisor host through shared kernel vulnerabilities. A compromised container potentially does — because they share the host kernel. This is not a theoretical concern. Container escape vulnerabilities are documented and real.

The overhead argument also ignores hardware reality. Modern servers have hundreds of gigabytes of RAM and dozens of cores. The resource overhead of a lightweight VM on that hardware is a rounding error. If you're running so many workloads that VM overhead is genuinely a concern, you have bigger architectural questions to answer than "containers vs. VMs."

The Cultural Problem: Half-Assers Colonizing Systems Engineering

I'll say the quiet part loud: Docker became dominant because developers decided they should own deployment without doing the work to understand deployment. Half-assers, the lot of them — reaching for a tool that let them skip the hard parts and still claim they'd "solved" the deployment problem. It's infrastructure colonialism: take the territory, discard the existing governance, and install your own system that you're comfortable with.

On Windows environments, this has been particularly destructive. Windows Server administration is a genuine discipline with deep expertise built up over decades. Active Directory, DNS, IIS, certificate management, Windows clustering, DFS, WSUS, SCCM — these are complex, powerful systems that skilled Windows administrators know intimately. When development teams bypass all of that by containerizing their applications and demanding Kubernetes clusters, they don't eliminate that complexity. They just move it somewhere neither team fully owns, strip away the Windows-native tooling that made it manageable, and call it "modernization."

The irony is that using Docker well — really understanding it, not just running images someone else built — requires solid systems knowledge. You need to understand Linux namespaces, cgroups, overlay filesystems, bridge networking, iptables rules, and kernel capabilities. The developers who reach for Docker casually are driving a car without knowing how an engine works. That's fine on smooth roads. It becomes a serious problem when something breaks.

That word "legacy" — applied dismissively to Windows Server tooling — is doing a lot of dirty work in this industry. It means "working, proven, and understood by people who make me feel bad about what I don't know."

Where Docker Actually Earns Its Place

To be fair: Docker is not useless. There are genuine use cases where it earns its complexity.

Reproducible CI/CD environments are one of the strongest cases. Ensuring that every build runs in an identical, clean environment across a heterogeneous fleet of build agents is genuinely hard, and containers handle it well. I cannot argue with that.

Ephemeral development and testing environments — spinning up a database or a third-party service dependency locally for a quick test — is a legitimate convenience use case where the operational overhead is low and the benefit is real. That's fine, unless it becomes the production solution.
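
For what it's worth, here's what that legitimate convenience case looks like — the container name, password, and image tag are illustrative, and the sketch writes the command to a script rather than assuming a Docker daemon is present:

```shell
# A throwaway Postgres for a local test run. --rm guarantees the container
# and its state vanish when it stops -- which, for this use case, is the point.
cat > /tmp/dev-db.sh <<'EOF'
#!/bin/sh
docker run --rm -d \
  --name throwaway-pg \
  -e POSTGRES_PASSWORD=devonly \
  -p 5432:5432 \
  postgres:16
# run the tests, then:
#   docker stop throwaway-pg    # container and data are gone
EOF
chmod +x /tmp/dev-db.sh
```

Ephemeral in, ephemeral out. The trouble starts when this exact command graduates to production.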

Large polyglot service ecosystems with legitimately conflicting runtimes and dependencies can benefit from the isolation containers provide. But ask yourself — as a developer or architect — is allowing multiple versions of everything the right solution, or just laziness? Don't be lazy. It's something I find myself repeating to both developers and LLM coding assistants; both are sneaky and will try to slip in the easiest, worst solution. I always force my AI coding assistants to ask themselves: "Don't be lazy. Are you being lazy?"

The problem isn't that Docker has no valid use cases. The problem is that it became a cultural default — adopted reflexively, applied broadly, and rarely questioned — by teams who didn't have the systems knowledge to evaluate whether it was the right tool for their specific situation.

What You Should Actually Do Instead — On Either Platform

For most deployment scenarios, here is a more defensible approach:

1. Deploy as a native service. On Linux, run under systemd with proper unit files, cgroup resource limits, and journald logging. On Windows, write a proper Windows Service with a scoped service account and Event Log integration. Both are well-understood, easily monitored, and manageable with standard tooling. This is not hard — it's just discipline.

2. Use IIS Application Pools for Windows web workloads, nginx/Apache for Linux. On Windows, Application Pools have been providing process isolation, automatic recycling, and resource governance for twenty years. On Linux, a well-configured nginx reverse proxy in front of application processes managed by systemd gives you the same. Neither requires a container runtime.

3. Use VMs when you need genuine isolation. On Windows, Hyper-V. On Linux, KVM. Both give you real OS-level boundaries, a proper security model, snapshot/rollback, and something your operations team can manage with existing skills and tooling.

4. Use configuration management for repeatability. PowerShell DSC or Ansible on Windows. Ansible, Puppet, or Chef on Linux. Version-controlled, auditable, idempotent deployment without container overhead — on either platform.

5. Make developers learn the target environment. Whether it's Windows Server or Linux, software written with genuine understanding of where it runs is always better software. There are no shortcuts worth taking.
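
To sketch point 2's Linux half — the server name, port, and upstream are hypothetical, and the config is written to /tmp so the sketch stands alone (on a real host it belongs in /etc/nginx/conf.d/):

```shell
# nginx fronting an application process that systemd manages directly --
# no container runtime between the proxy and the service.
cat > /tmp/myapp.conf <<'EOF'
server {
    listen 80;
    server_name myapp.example.internal;

    location / {
        # The app is a plain systemd-managed process on :8080
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
EOF

# Apply on a real host: nginx -t && systemctl reload nginx
```

Two readable files — this and a unit file — replace the Dockerfile, the compose file, and the bridge network.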

Conclusion

Docker is a tool. Like all tools, it has appropriate uses and inappropriate ones. The industry's wholesale adoption of containerization as a default — driven largely by developer preference rather than operational reasoning — has in many cases made systems more complex, less transparent, harder to secure, and more expensive to operate than the alternatives it replaced.

The engineers who built reliable infrastructure before Docker existed weren't doing it wrong. They were doing it with knowledge and discipline that the industry has increasingly treated as optional. Docker's real legacy may not be the technology itself, but what its popularity revealed: a widening gap between software development and systems understanding, and our collective willingness to build abstractions rather than bridge it.

The right solution to most deployment problems isn't a better container. It's an engineer who understands both the software and the system it runs on. I'm sick of encountering half-assers who get butthurt because I call them out on their laziness.

The next time someone tells you the solution is to containerize it, ask them if they've ever written a Windows Service. Ask them if they know what an IIS Application Pool is. Ask them if they understand the security model they're replacing and why.

Watch the silence that follows.

That silence is the sound of a half-asser. Someone who mistook tooling for knowledge, abstraction for understanding, and docker run for engineering.

One more thing worth saying: we've hit the ceiling on CPU clock speed. We're not getting faster processors anymore — we're just throwing more cores at problems. The pendulum is getting ready to swing back toward "who can write the most performant code, based on a solid understanding of efficiency." We're not quite there, but it's coming in the next few years. If you're a half-asser who's been hiding behind abstractions and containers and frameworks you don't fully understand — you might find yourself flipping burgers before it's all over. Pay off your house before that day comes, or enjoy trailer park life. I started in a trailer park. I never want to go back.

Views expressed are based on decades of combined systems and software engineering experience — on both Windows and Linux, bare metal and cloud, development and operations — and the scars to prove which shortcuts were worth taking and which weren't.
