Anthropic's CEO predicted '90% of code will be made by AI in 3–6 months.' The problem? The prediction was 6 months ago
On March 10, 2025, Anthropic CEO Dario Amodei said we'd be "there in three to six months, where AI is writing 90% of the code," and that within a year AI might be writing "essentially all of the code" (reference).
Guess what? It's August 25, 2025. And unless you live in another world, that didn't happen.
But hold on, my AI-hater friend: this isn't a victory for "AI can't code." It can, and it's getting better fast. But 90% across the board was never realistic for 2025; there are still plenty of challenges to overcome, like the sheer energy consumption that big tech is still figuring out.
Here’s the sober breakdown.
What actually happened

- Adoption surged, not full automation. 84% of developers use or plan to use AI tools; 51% of professionals use them daily. That's widespread augmentation, not near-total replacement. (Stack Overflow)
- AI writes a lot of code, just not 90%. GitHub's own data (yes, from 2023–24) pegs Copilot at ~46% of code in files where it's enabled, and suggestion acceptance around ~30% in enterprise studies. That's meaningful, but nowhere near 90% across entire systems. (The GitHub Blog)
- Autonomous agents still stall on real repos. On the rigorous SWE-bench Verified tasks, top models this year solve on the order of one-third to two-fifths of issues, not "nearly everything." That gap is exactly where production work lives. (SWE-bench)
- Industry leaders stayed cautious. IBM's CEO Arvind Krishna publicly estimated closer to 20–30% of code could be AI-written, pushing back on the 90% narrative. (TechCrunch)
- Yes, outliers exist. YC's Garry Tan said 25% of W25 founders reported 95% LLM-generated LOC. Startups writing greenfield code with aggressive risk tolerance are not the median enterprise. (X, formerly Twitter)
Why the 90% timeline missed reality
- "Lines of code" is the wrong yardstick. AI is fantastic at scaffolding, boilerplate, tests, adapters, and CRUD. It's weaker at system integration, invariants, and edge conditions, the parts that dominate serious software. Counting LOC inflates perceived progress because boilerplate lines are easy and abundant.
- Autonomy gap: from snippets to shipped systems. Building software ≠ emitting functions. It's wrestling with repos, migrations, flaky tests, service contracts, secrets, IAM, CI/CD, rollbacks, and tickets. Agentic coding still falters on long-horizon, tool-rich workflows; public benchmarks show far from 100% task completion. (SWE-bench)
- Trust and safety slowed enterprise rollout. Multiple studies show AI-generated code is prone to security and correctness issues, which keeps human review in the loop and slows adoption in regulated environments.
Again, we are flooded with predictions: AI this, AI that. But what you have to do is work on the now. Build a solid understanding of what's actually going on, and you'll be able to tell if and when something is really about to change.
The real work is just beginning
We're still figuring out the fundamentals. How powerful can Anthropic's MCP (Model Context Protocol) actually be for connecting AI systems to our development workflows? What can we build on top of that foundation?
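To make that concrete, here is a minimal sketch of an MCP server that exposes repository context to a coding assistant. It assumes the official Python SDK's FastMCP helper; the tool names, the AGENTS.md path, and the src/ layout are illustrative assumptions, not anyone's real project.

```python
# Sketch of an MCP server exposing repo context to an AI assistant.
# Assumes the official `mcp` Python SDK (FastMCP helper); the tools and
# paths below are hypothetical examples.
from pathlib import Path

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("repo-context")  # server name advertised to the client


@mcp.tool()
def get_conventions() -> str:
    """Return the team's coding conventions (e.g. an AGENTS.md file), if present."""
    conventions = Path("AGENTS.md")
    return conventions.read_text() if conventions.exists() else "No conventions file found."


@mcp.tool()
def list_source_files(suffix: str = ".py") -> list[str]:
    """List source files under src/ so the assistant can see the repo's shape."""
    return sorted(str(p) for p in Path("src").rglob(f"*{suffix}"))


if __name__ == "__main__":
    mcp.run()  # stdio transport, so a client like Claude Code can attach
```

The point isn't this particular server; it's that the protocol gives assistants a standard way to ask your tooling for the context they currently lack.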
Tools like agents.md are exploring how to dramatically improve context for AI coding in our actual repositories, not just isolated snippets. Because here’s a frustrating reality: every time you start a new session in Claude Code or similar AI assistants, you’re starting from zero context. No memory of your codebase structure, your team’s conventions, your ongoing refactoring, your technical debt priorities.
That context gap is massive. It’s the difference between an AI that suggests generic boilerplate and one that actually understands your system’s architecture, your database schema, your deployment pipeline, your team’s coding standards.
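As a rough illustration of what closing that gap looks like day to day, here is a small sketch that gathers repo-level context (a conventions file in the AGENTS.md spirit plus the tracked file list) into a preamble you could feed to a fresh AI session. The file name and the git invocation are assumptions; adapt them to your own repository.

```python
# Sketch: collect repository context to seed a fresh AI coding session.
# Paths and commands are illustrative; adjust to your repo's conventions.
import subprocess
from pathlib import Path


def build_context_preamble(repo_root: str = ".") -> str:
    root = Path(repo_root)
    parts = []

    # Team conventions, if the repo keeps them in an AGENTS.md-style file.
    conventions = root / "AGENTS.md"
    if conventions.exists():
        parts.append("## Project conventions\n" + conventions.read_text())

    # A cheap map of the codebase: the files git already knows about.
    tracked = subprocess.run(
        ["git", "ls-files"], cwd=root, capture_output=True, text=True, check=True
    ).stdout
    parts.append("## Tracked files\n" + tracked)

    return "\n\n".join(parts)


if __name__ == "__main__":
    print(build_context_preamble())
```

Crude as it is, even this much context moves an assistant from generic boilerplate toward suggestions that fit the codebase in front of it.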
The next 6 months matter more than the last 6
The companies solving context persistence, repository understanding, and long-horizon coding workflows are the ones that might actually deliver on the automation promises. Not through grand predictions, but by solving the grinding daily problems developers actually face.
So yes, stay informed about the capabilities. Track the benchmarks. But more importantly: experiment with the tools available today, understand their limitations, and position yourself to recognize when the real breakthroughs happen. Because when they do, you’ll want to be ready, not caught off guard by another CEO prediction that misses the mark.
References
https://www.cfr.org/event/ceo-speaker-series-dario-amodei-anthropic
https://survey.stackoverflow.co/2025/ai?utm_source=pvgomes
https://techcrunch.com/2025/03/11/ibms-ceo-doesnt-think-ai-will-replace-programmers-anytime-soon/?utm_source=pvgomes.com
https://www.swebench.com/?utm_source=pvgomes.com
https://agents.md/