What happened: The New Stack argues that “swarms” of AI coding agents don’t magically fix software delivery — they just recreate the same failure modes, because the real problem isn’t human laziness. It’s the gravitational pull of big batches of work.
Why it matters: If your plan is “add more agents” until complexity submits, the article suggests you’re about to discover a familiar law of nature: coordination costs scale faster than optimism. That’s true whether the team is carbon-based or prompt-based.
Wider context: Citing research into multi-agent software construction, the piece claims splitting a system across multiple agents can increase overhead enough to outweigh any speed gains — meaning delivery discipline (small changes, fast feedback) matters even more when AI is writing the code.
Background: The author uses an extended “project village” metaphor to dunk on the habit of blaming people for systemic delivery failures. The point: gravity was never willpower, and software failure was never just “bad developers” — it’s batch size, feedback loops, and the physics of coordination.
Source article: "Why AI systems are failing in familiar ways" — The New Stack
Singularity Soup Take: AI agents are great at producing more “stuff” per hour — and equally great at producing more hidden coupling per hour. If you don’t shrink the batch and tighten the feedback loop, you’re not building faster; you’re just accelerating into the same wall with better autocomplete.
Key Takeaways:
- Batch size is the villain: The article’s central thesis is that large batches of work create a “gravity” that drags projects into failure, and removing humans doesn’t remove the force — it just changes the excuse you use in the postmortem.
- Multi-agent coordination has a tax: Referencing experiments in multi-agent system construction, it argues that dividing work among agents can add enough coordination complexity to wipe out the theoretical gains of parallelism.
- Delivery discipline still wins: The implied prescription is Continuous Delivery-style habits: keep changes small, integrate often, and design workflows around rapid feedback — because AI can generate code faster than you can discover you’re wrong.
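The coordination tax described above can be sketched with a back-of-the-envelope model. This is an illustrative assumption, not data from the article: assume the work itself parallelizes perfectly across agents, but every pair of agents adds a fixed integration/coordination cost (the classic n·(n−1)/2 communication-paths argument). The constants `WORK_HOURS` and `COORD_COST_PER_PAIR` are made up for the sketch.

```python
# Hypothetical model of the "coordination tax": parallel work plus a
# per-pair coordination cost. All numbers are illustrative assumptions.

WORK_HOURS = 100.0         # total serial work, in hours (assumed)
COORD_COST_PER_PAIR = 0.8  # hours lost per pair of agents (assumed)

def delivery_time(n_agents: int) -> float:
    """Idealized wall-clock time: evenly split work + pairwise coordination."""
    pairs = n_agents * (n_agents - 1) / 2
    return WORK_HOURS / n_agents + COORD_COST_PER_PAIR * pairs

if __name__ == "__main__":
    for n in (1, 2, 4, 8, 16):
        print(f"{n:2d} agents -> {delivery_time(n):6.1f} hours")
```

With these made-up constants, a few agents genuinely help, but by 16 agents the pairwise coordination term dominates and total delivery time is worse than one agent working alone — the "wipe out the theoretical gains of parallelism" effect, in miniature.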
Related News
China’s Tech Giants Rush To Deploy OpenClaw Agents — Agents everywhere is the easy part; making them behave like a coherent delivery system is the hard part.
Nvidia Preps NemoClaw, an Open-Source Agent Platform — The tooling wave is real; the physics of coordination doesn’t care.
Rogue AI Agents Practice Cybercrime, Humans Call It Innovation — When you scale agents without tight controls, you don’t just get more output — you get more ways to be wrong at speed.