AI
8 April 2026
5 min read
Anthropic had two accidental code leaks in a single week. Here's what that actually tells us about the pace of AI development — and what every organisation trying to adopt it should take from it.
Anthropic had two accidental code leaks in a single week, both attributed to human error. In one incident, 44 fully built features sitting behind "do not release" flags were accidentally exposed during a routine deployment, including visibility of a new model and an entire suite of unreleased functionality. In the other, more than 500,000 lines of code leaked publicly, including the source code behind Claude Code.
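For context on how this kind of exposure happens: features like these typically sit behind boolean flags checked at runtime, so a single deployment that ships the wrong flag configuration exposes everything behind them at once. Here is a minimal sketch of the pattern, with hypothetical flag names; nothing in it reflects Anthropic's actual infrastructure.

```python
# Minimal sketch of server-side feature-flag gating.
# Flag names and FlagStore are hypothetical; this illustrates the
# general pattern, not Anthropic's actual release infrastructure.

UNRELEASED_FLAGS = {
    "new_model_picker": False,   # "do not release" until launch
    "workspace_tools": False,
}

class FlagStore:
    """Loads flag states from deployment config; the defaults matter."""

    def __init__(self, config: dict[str, bool]):
        self._flags = config

    def is_enabled(self, name: str) -> bool:
        # Safe default: unknown or missing flags stay OFF. If a deploy
        # ships config with the wrong values, or the check is bypassed,
        # every gated feature becomes visible in one push.
        return self._flags.get(name, False)

flags = FlagStore(UNRELEASED_FLAGS)

def render_feature_list(all_features: list[str]) -> list[str]:
    """Return only the features a user should currently see."""
    return [f for f in all_features if flags.is_enabled(f)]
```

The point of the sketch is the blast radius: one bad configuration push flips the gate for everything at once, which is consistent with 44 features surfacing simultaneously.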
I've been vocal about leaning heavily into Anthropic and Claude over the past year, driven by model capability, feature depth, and the company's approach to safety. So when something like this happens, it warrants a direct response rather than a defensive one.
Here is what I actually think.
What the leaks revealed is genuinely staggering. The accidental exposure of 44 built-but-unreleased features is, paradoxically, one of the most exciting signals I've seen about what is coming. These are not concepts or roadmap slides; they are finished features being held back. The pipeline is full. The pace isn't slowing.
What the leaks exposed is equally sobering. A leading AI laboratory, one of the most technically capable organisations in the world, leaked its own source code because deployment processes couldn't keep pace with the speed of development. This is not a story about incompetence. It is a story about what happens when the rate of change outstrips internal process maturity. If the builders of these tools are navigating that tension, so is every organisation trying to adopt them.
What we should learn from it is the part that matters most.
Tool selection matters less than most organisations think. Claude, ChatGPT, Microsoft Copilot, Gemini: there will always be a new release, a new benchmark, a new feature set to evaluate. The pace is not stopping. The businesses pulling ahead right now are not winning because they picked the right platform. They are winning because they stopped treating AI as a technology decision and started treating it as an operating model evolution. There is a company-wide adoption philosophy sitting above any single platform choice, and that is what determines outcomes.
The second lesson is more operational. Software deployments at this pace will require human oversight for quite some time yet. Not as a bureaucratic checkpoint, but as a genuine control layer. The Anthropic incidents are a reminder that even the most capable teams in the field are moving faster than their own processes can reliably manage. Building that human review layer into your own AI workflows is not caution; it is good operating practice.
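To make that concrete, here is a minimal sketch of what a hard human checkpoint in a release workflow can look like. The names (Deployment, request_approval) are hypothetical; most CI/CD systems offer a built-in equivalent, such as protected environments with required reviewers.

```python
# Minimal sketch of a human approval gate in a deployment workflow.
# Deployment and request_approval are hypothetical names; in a real
# pipeline this step would page a reviewer and wait on their decision
# rather than read from stdin.

from dataclasses import dataclass

@dataclass
class Deployment:
    artifact: str
    flags_changed: list[str]

def request_approval(deploy: Deployment) -> bool:
    """Block until a human explicitly approves the release."""
    print(f"Deploying {deploy.artifact}; flag changes: {deploy.flags_changed}")
    return input("Approve release? [y/N] ").strip().lower() == "y"

def release(deploy: Deployment) -> None:
    # Any change to release flags forces a human decision; the pipeline
    # cannot proceed without one.
    if deploy.flags_changed and not request_approval(deploy):
        raise SystemExit("Release blocked pending human review.")
    print(f"Shipping {deploy.artifact}")

release(Deployment(artifact="api-v2.14", flags_changed=["new_model_picker"]))
```

The design choice that matters is that the gate is structural, not advisory: the workflow cannot complete without an explicit human decision.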
Does this change my view on Anthropic? No. The pace is real for everyone, including them. What matters is how they respond, and the response has been direct and accountable. That tells you more about an organisation than the incident itself.
Get in touch if you want to discuss what building a durable AI operating model looks like for your organisation.
Work With Patrick
Patrick works directly with leadership teams on AI strategy, digital retail, and commercial growth. If something here resonates, start a conversation.
Get in Touch