What I Mean When I Say I Cannot Code
I do not mean I have no technical judgement. I mean the act of writing code by hand was never the part I was good at, and certainly not the part I enjoyed. I spent years in infrastructure and application support. I understand deployment risk, operating environments, networking, databases, security posture, production failure modes, and the difference between something that merely runs and something that will survive contact with real users.
What used to slow me down was turning all of that into syntax. I could describe the system I wanted. I could describe the constraints. I could explain how it should fail, recover, log, authenticate, and deploy. The tedious part was always the translation layer between that understanding and the literal code.
What The Current Tools Actually Do
The reason this changed so sharply is that the tools are no longer just autocomplete. Anthropic's current Claude Code docs describe it as an agentic coding tool that can read a codebase, edit files, run commands, and integrate with development tools. Anthropic's product pages also position it across the terminal, IDE, Slack, and the web, which is a very different category from a chat box that occasionally writes snippets.
OpenAI's current Codex docs make the same shift obvious from a different angle. The official quickstart positions Codex in the app, IDE, CLI, and cloud. OpenAI's own cloud workflow documentation says you can launch coding tasks, monitor them in the background, review diffs, and create pull requests directly.
That matters because it means the tool is not just producing code on demand. It is participating in the work: reading context, modifying files, running tasks, and helping move something towards a deployable state.
This is the bit that changed everything for me: the machine is finally capable of doing the syntax-heavy part at a useful level, which leaves me free to spend my time on the higher-value part I already knew how to do.
Why My Background Still Matters
None of that makes domain knowledge optional. In some ways it makes it more important. Claude Code and Codex can implement quickly, but they still need direction. They need someone to explain the architecture, the operational constraints, the security boundaries, the deployment environment, the likely failure modes, and the definition of done.
That is where I have an advantage. I know what a healthy system looks like. I know what bad logging looks like. I know when session handling is going to become a problem. I know when a deployment pipeline is too brittle, when a background job strategy is going to hurt later, and when a database design is fine for a demo but wrong for a real workload.
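To make that instinct concrete, here is a toy Python sketch of my own (the function names and endpoint are invented for illustration, not taken from any real system) showing the gap between a call that merely runs and one written with failure modes and logging in mind:

```python
import logging
import urllib.request
import urllib.error

log = logging.getLogger("orders")

# Demo-grade: works on the happy path, hangs forever on a slow host,
# and tells you nothing useful when it breaks.
def fetch_orders_demo(url):
    return urllib.request.urlopen(url).read()

# Production-minded: bounded waits, a retry budget, explicit failure
# handling, and log lines that say what failed, where, and how often.
def fetch_orders(url, timeout=5.0, retries=3):
    last_error = None
    for attempt in range(1, retries + 1):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except urllib.error.URLError as exc:
            last_error = exc
            log.warning("fetch attempt %d/%d for %s failed: %s",
                        attempt, retries, url, exc)
    raise RuntimeError(
        f"could not fetch {url} after {retries} attempts") from last_error
```

Both versions pass a happy-path demo. Only the second one tells you anything when the network is flaky at 3 a.m., and knowing to ask for the second version is exactly the judgment I am describing.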
So when I work with these tools, I am not delegating judgment. I am delegating implementation. I specify the shape of the system, the operational realities, and the business constraints. The agent helps me produce the code far faster than I could ever write it myself.
What Building Looks Like For Me Now
In practical terms, the workflow is simple. I describe the problem in operational language, not just feature language. I explain the environment, the dependencies, the user flow, the failure conditions, the security assumptions, and how I expect the system to be verified. Then I let the coding agent turn that into an implementation I can review and refine.
That is why I can now build applications that are meaningfully closer to production grade than the sort of prototypes I used to stall out on. The agents are doing the code-heavy lifting. I am supplying the production instincts.
OpenAI's current Codex use-case material talks about onboarding large codebases, migrations, and task execution. Anthropic's Claude Code documentation emphasises persistent context, tools, workflows, and memory. Those are exactly the sorts of capabilities that make these systems useful to somebody like me. I am not asking for a toy example. I am asking for help turning operational intent into working software.
What AI Does Not Replace
It does not replace architecture. It does not replace debugging judgment. It does not replace operational accountability. It does not replace security review. And it absolutely does not replace the ability to tell the difference between something that merely passes a happy-path demo and something that is actually fit to run.
That is why I do not buy the simplistic framing that these tools either replace developers entirely or only help beginners write faster snippets. In reality they are leverage. If you already understand systems deeply, they let you express that understanding much more directly. If you do not, they can help you move faster towards mistakes you still do not recognise.
Bottom Line
I still would not describe myself as someone who loves coding in the traditional sense. What I love is building useful systems that behave properly under real-world constraints. The arrival of serious coding agents means I can now do far more of that work myself.
So yes, I still say I cannot code. But in 2026 that sentence means something different. It no longer means I cannot build. It means I finally have tools that handle the part I never wanted to do, whilst letting me focus on the part I actually understand.
Cover image sources: Claude Code by Anthropic and OpenAI Codex official product/docs pages, captured on April 8, 2026.