
One Flew Over the Context Window
We traded the reward of creation for the liability of owning.
I miss having nothing to do. Sitting on my couch without feeling anxious that I have agents doing something on my behalf.
Eight agent sessions are open on my screen right now. I need to pick up where I left off on one of them. But all I have is questions, not answers. What I’m certain of is that some of the eight are hotfixes for hotfixes the agent claimed to verify, giving me false confidence they would work.
A year ago, I went all in on AI coding agents. Spent thousands on tokens. Built a Claude Code plugin called hope. The whole idea was: if output is cheap, make thinking systematic. Maintained it as the ecosystem shifted under me. Everything I could produce expanded overnight.
Writing got cheap, and I exhausted it. But clarifying intent, shaping it, and verifying output takes so much effort. I still have to own decisions and react to their consequences.
I got greedy and committed to every feature and every doc before verifying it was worth it. I ended up owning way more than I used to. And this, my friends, is the pain of the craft. We replaced the reward we got from creating with the suffering of owning.
Here’s what my morning looks like. I open the laptop. Eight sessions from yesterday. I pick one. Scroll through the code. Nothing clicks. I read the commit messages. Still nothing. I was in the room when this happened. But being a reviewer with divided attention is not the same as being an author with single focus.
On the surface it looks like the best-documented, most ceremonial engineering you’ve seen. Underneath, it’s the output of one-eighth of your attention. We got ourselves the colleagues we would never hire: very popular and productive, but with zero accountability and fake urgency.
The friction that easy generation removed was doing work we now need to restore. Moving slowly meant you understood what changed. You built your mental model while committing to the change. You got the time to experience the edge cases and mitigate them. You can’t start at the end and pretend everything is just fine.
Your colleagues still need the same time to build their mental models. To experience the failures. If none of us goes through these, who does? The agent pretends it went through the whole experience, and that’s why it’s fast. If you made it really go through the whole experience, you’d discover it’s as slow as you. You still put in the same time and more effort, and you get the same output, but without your mental model, or your team’s, evolving.
AI is stateless. It carries nothing between sessions. No memory of last month’s outage. No pain from the migration regressions. Plug in the most sophisticated memory layer you can get your hands on. Do you see a difference? Do you feel a difference? Did you learn more about what matters to you and your product?
Do you feel confident in what you know about what you own? Would you risk your mental health on a bet that the agents will handle everything the world throws at you?
Each agent you use to make a change moves to the next session without learning anything new or even remembering what it just did. Let’s pretend retrieval is the answer. At the start of every session, let’s hope the agent retrieves the correct files, and wait while it explores the codebase and the git history. Now watch how that unnecessary effort compounds.
You open multiple sessions, right? Now every interaction costs Agent(context setup) + Human(context switching). Decades of research show it takes engineers around 20 minutes to regain focus after a context switch. With agents, in my experience, it’s less. Maybe five minutes. But those five minutes demand high energy. Every time. It’s not sustainable to operate at high effort all the time.
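To make the compounding concrete, here’s a toy cost model. The five-minute human refocus figure is mine from above; the agent setup time and visit counts are made-up assumptions for illustration, not measurements.

```python
# Toy model of daily overhead across agent sessions.
# All numbers are illustrative assumptions, not measurements.

AGENT_SETUP_MIN = 3   # agent re-explores the codebase each visit (assumed)
HUMAN_SWITCH_MIN = 5  # my high-energy refocus time per switch


def daily_overhead(sessions: int, visits_per_session: int) -> int:
    """Minutes per day spent on per-interaction overhead.

    Every visit to a session pays both the agent's context setup
    and the human's context switch.
    """
    visits = sessions * visits_per_session
    return visits * (AGENT_SETUP_MIN + HUMAN_SWITCH_MIN)


# Eight sessions, checked three times each: 8 * 3 * (3 + 5) = 192 minutes.
print(daily_overhead(sessions=8, visits_per_session=3))  # 192
```

Over three hours a day, gone to setup and refocus alone, before any actual review happens. The exact numbers don’t matter; the multiplication does.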
We can be heroes when there is a reward. But we can’t make that the new norm. Trying to be a hero is a trap because guess what? You aren’t one. You’re a regular human using the same brain your ancestors have been using since the Stone Age. Same brain, plus lived experience. What evolved over the years is compounding collaborative knowledge, not the biology. Will the desire to be a hero normalize us getting a chip planted so we can bypass the cognition bottleneck?
And before you say “just use markdown properly, yay, simplicity wins”: I did. I had thousands of lines trying to encode what I knew. It was great. I started packaging it and sharing it with my friends. It looked like we were all doing everything the checklist asked. We got comfortable, started relying on it, and took shortcuts to speed things up, to maximize our velocity. But I feel distracted. I feel tired of the velocity. And I can no longer reliably, confidently imagine what comes next. I lost the connection with what I stand behind.
Generation scales. But what about ownership, the ability to form a vision?
Use AI. I use it all the way. It’s my new hammer, practically my new window to the world. But use it intentionally. It doesn’t give you superpowers.
Think of it as a new type of value. Computers used to deal with booleans, numbers, strings, all in a deterministic way. Now we’ve managed to model non-deterministic values and we need to find the proper way and place to use that kind of earned capability.
AI is not a new layer of abstraction in the same sense as C over assembly. The C compiler took ownership of the assembly. Deterministically. Same input, same output. You never reviewed the assembly. You never maintained it. AI doesn’t take ownership of anything. It hands you artifacts and walks away. That’s not a new abstraction layer. That’s a careless, overconfident colleague with zero stakes.
Don’t rush, and don’t feel left behind. The agentic coding products are working hard to reframe and hide the cognition problem, and that is great. But UX, psychology, and every known solution and theory we have offer no workaround that scales our cognition.
I stopped chasing all the prompting techniques and new solutions that over-promise to eliminate my struggle. They end up deepening it. My flow today is one focused session, one well-thought-out PR at a time. Gaining experience slowly is way better than flooding the team with a stack of 10 PRs and getting upset that they don’t have time to review them… because surprise, they produced their own stack of PRs too. And the agents generated tons of nonsense comments that bury any chance to think clearly and align as a team. That window is not there anymore.
The code is the truth. For now. It runs or it doesn’t. That’s more honest than anything above it.
Code is the most honest layer we have. And it’s not honest enough. Too scattered to own at scale, too deterministic to replace with English. We were already losing ground-level understanding. No one person held a whole system.
The debt in our backlog is a signal that, pre-AI, we already lacked a shared understanding. AI could close that gap or widen it, depending on how intentional and experimental you are when using it.
We’re still looking for the layer where intent and execution meet clearly enough that a person can hold the whole thing.
Until we have that or plant a chip in our brain, someone has to stand on the ground floor. That someone is you.