Parallelism Was a Workaround. Agents Make It Optional.
I’ve been running my current project one issue at a time for months. Plan, implement, QA, ship. No long-lived branches. No merge conflicts. No “wait, which version of this file is right?”
It’s calmer than any workflow I’ve had. It’s also faster.
I think I know why, and I think the implication is bigger than personal workflow: parallel development was never really about moving fast. It was about humans being slow. And if that’s true, agents change the math.
Why we parallelized
When a feature takes a human five hours, you can’t queue five of them behind one person. The other four sit idle. So you split the work, branch out, and pay the merge cost at the end. Parallel wasn’t a virtue — it was the least-bad option when serial would leave most of the team waiting.
All the tooling we built around this — feature branches, PR reviews, conflict resolution, integration testing — exists to make parallel work survivable. It’s overhead we pay to keep people busy.
What changes with agents
In my current workflow, a well-specified task takes an agent five to ten minutes. Not always clean, but the loop to get it clean is short too.
At that speed, the cost of coordinating parallel work starts to dominate the cost of doing the work. Context switches between branches, merge conflicts, cross-reviews, holding two feature specs in your head at once — all of it was bearable when the underlying task took five hours. When the task is ten minutes, the overhead is the bottleneck.
Inside a single context, serial is faster than parallel plus merge.
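The tipping point can be sketched as a back-of-envelope model. Every number here (task times, merge cost, context-switch cost) is an illustrative assumption of mine, not a measurement; the point is only the shape of the comparison:

```python
# Back-of-envelope: when does coordination overhead dominate the work itself?
# All durations are in minutes and are illustrative assumptions, not data.

def serial_time(tasks: int, task_min: float) -> float:
    """One context, one task at a time: no merge or branch-switching cost."""
    return tasks * task_min

def parallel_time(tasks: int, task_min: float, workers: int,
                  merge_min: float, switch_min: float) -> float:
    """Split across branches, then pay merge + context-switch overhead."""
    per_worker = (tasks / workers) * task_min
    overhead = (workers - 1) * merge_min + tasks * switch_min
    return per_worker + overhead

# Human-speed tasks: 300 min each. Overhead is a rounding error.
print(serial_time(5, 300))                    # 1500.0
print(parallel_time(5, 300, 5, 30, 10))       # 470.0 -- parallel wins

# Agent-speed tasks: 10 min each. Overhead is now the bottleneck.
print(serial_time(5, 10))                     # 50.0
print(parallel_time(5, 10, 5, 30, 10))        # 180.0 -- serial wins
```

With five-hour tasks, splitting across five branches wins even after paying for merges and context switches. Shrink the task to ten minutes and the same fixed overhead triples the total time.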
But parallelism doesn’t die — it moves
Here’s where I want to challenge my own conclusion.
Parallelism never really existed to solve slowness. It existed to prevent independent work from blocking other independent work. Two people touching two unrelated services were never a “parallelism” problem — they were two people doing two different things.
Agents don’t change that. Two agents working on two independent services with no overlap aren’t a parallelism problem either. They’re just two workers. The merge cost is zero because there’s no merge.
So the hypothesis isn’t “parallelism is dead.” It’s that the useful unit of ownership shrinks.
When a human could spend five hours on a task, you needed a team of five to eight people per service to sustain the throughput. With agents, one or two people can sustain similar output. The unit drops from “a team per service” to “one or two people per service.”
Team structure still shapes what you ship. But the team that ships a service can now be one person plus their agents.
What this implies for larger teams
If the unit drops to one or two people, a team of twenty isn’t “a team of twenty with agents.” It’s ten to twelve independent pairs, each owning a repo or service end to end.
The hard conversation stops being “how do we coordinate twenty people?” and becomes “where are the boundaries between services?” Because service boundaries are now team boundaries.
That’s a different kind of engineering org. Fewer meetings about who’s working on what. More work on architecture: what belongs together, what doesn’t, what contracts the services share. The organizational problem collapses into the architectural one.
Honest objections
I want to be careful here. I’ve only run this at a scale of one. Everything below is stuff I’d need to answer before claiming this works at forty engineers.
Bus factor. One-person ownership is the classic anti-pattern. If the only person who understands a service leaves, gets sick, or takes vacation, the service is unsupported. What I’d try: a rotating pairing between owners, documentation as a first-class artifact (I already write specs and plans into a docs/ folder the AI reads — could extend to cover service knowledge), and the AI itself as a second brain over the codebase. Still a real risk, and I’m not going to pretend otherwise.
Not every task is ten minutes. Cross-service refactors, production incidents, architecture decisions, anything that needs judgment calls between humans — these still need synchronous coordination. The hypothesis applies to normal development flow, not to every kind of work. Incidents and big refactors still pull owners together.
Solo bias. I’m one person working on one product. The dynamics of a real team — different opinions, different speeds, different judgment — aren’t the same as me working alone. It’s entirely possible that what feels optimal at n=1 breaks at n=20 for reasons I can’t see from here.
So: hypothesis, not conclusion.
Where I’d place the bet
What I’m fairly confident about: inside a single context, parallel work with agents is usually more expensive than serial.
What I’m less sure about but willing to bet on: the right response isn’t to force parallelism anyway. It’s to shrink the unit of ownership until serial inside the unit is the right default, and parallelism happens naturally between units, where there’s no overlap.
If your team of ten is fighting over the same repo, the problem is probably not the process. It’s the size of the team on that repo.