A note on coding agents and constraints

The main problem with AI coding agents is not model capability. It is how much room we give them to guess.

Observation

The more I use AI coding agents in real codebases, the less I think the main problem is model capability.

A lot of the time, the issue is simpler: the agent has too much room to guess.

Why it matters

That guesswork rarely fails in a dramatic way. More often, it produces code that works but drifts from the system: another validation pattern, a slightly different query shape, another folder convention, another abstraction the codebase did not need.

That kind of drift is easy to merge and annoying to absorb later.

Current view

My current view is that many teams are pulling the wrong lever. They focus on better models, longer prompts, more context, or more autonomy.

Those things can help, but they do not solve much if the working patterns of the repo are still implicit.

What has worked better for me is reducing guesswork by making constraints more explicit.

Practical implication

When an agent keeps producing inconsistent work, I think the first place to look is the constraint layer around it.

That usually means being more explicit about repo patterns, boundaries, conventions, preferred approaches, banned patterns, and review expectations.
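One way to make a "banned patterns" constraint explicit rather than implicit is a small check the agent (and CI) can run against its own output. The sketch below is illustrative, not from the post: the specific patterns and their rationales are hypothetical stand-ins for whatever conventions a given repo actually enforces.

```python
import re

# Hypothetical banned patterns for an example repo.
# Each maps a regex to the reason a reviewer would give.
BANNED_PATTERNS = {
    r"\bprint\(": "use the project logger instead of print()",
    r'f"SELECT ': "use the query builder, not f-string SQL",
}

def check_text(text: str) -> list[str]:
    """Return one human-readable message per banned-pattern hit."""
    violations = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for pattern, reason in BANNED_PATTERNS.items():
            if re.search(pattern, line):
                violations.append(f"line {lineno}: {reason}")
    return violations

# Example: agent-produced code that "works" but violates repo conventions.
snippet = 'print("debug")\nquery = f"SELECT * FROM users"\n'
for violation in check_text(snippet):
    print(violation)
```

The point is not this particular script; it is that a constraint the agent can be told about, and that a machine can verify, leaves far less room to guess than a convention that lives only in reviewers' heads.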