Three of America’s most competitive AI companies are now sharing intelligence with each other, not about their models, but about who’s trying to steal them. OpenAI, Anthropic, and Google have begun quietly coordinating to detect and block Chinese AI firms from siphoning capabilities out of their systems through a technique called distillation. The effort runs through the Frontier Model Forum, an industry nonprofit all three founded alongside Microsoft in 2023, Bloomberg reported.

It’s an unusual arrangement: these are companies that fight over the same customers, the same engineers, and increasingly, the same government dollars. But the threat has apparently grown large enough to set rivalries aside.
What distillation actually is, and why it’s become a problem
Distillation is standard practice in AI development. Companies use it routinely to build smaller, cheaper versions of their own models: a compact “student” model is trained to reproduce the outputs of a larger “teacher” (a minimal sketch appears at the end of this section). What’s not standard is when a competitor uses the same technique, at industrial scale and through fake accounts and proxy networks, to clone your model’s capabilities without paying for the R&D behind them.

Anthropic has put specific numbers to this. In February, the company named three Chinese labs (DeepSeek, Moonshot, and MiniMax) as having run coordinated extraction campaigns against Claude, racking up over 16 million exchanges through roughly 24,000 fraudulent accounts. MiniMax alone drove more than 13 million of those. To get around Anthropic’s ban on commercial access from China, the labs allegedly routed traffic through proxy services running networks of up to 20,000 fake accounts at a time, mixing extraction traffic with ordinary requests to avoid detection.

OpenAI told the US House Select Committee on China in February that DeepSeek had kept up similar efforts against American labs through increasingly obfuscated methods. Google’s threat intelligence team flagged a surge in what it calls “model extraction attacks” on Gemini; one campaign alone generated over 100,000 prompts apparently designed to replicate the model’s chain-of-thought reasoning.
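To make the distinction concrete, here is a minimal sketch of the legitimate version of the technique, written against PyTorch. The model and data objects are placeholders, and the temperature value is illustrative rather than anything the labs have disclosed.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Train the student to match the teacher's output distribution.

    Softening both distributions with a temperature exposes the teacher's
    relative preferences among outputs, not just its single top choice.
    """
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(log_probs, soft_targets, reduction="batchmean") * temperature**2

def train_step(student, teacher, batch, optimizer):
    """One distillation step: the teacher is frozen, only the student learns."""
    with torch.no_grad():
        teacher_logits = teacher(batch)
    loss = distillation_loss(student(batch), teacher_logits)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The unauthorized variant described above can’t see teacher logits at all: a commercial API typically returns only sampled text, so an attacker instead harvests prompt-response pairs and fine-tunes the student on them. That is one reason the campaigns Anthropic describes involve millions of exchanges rather than thousands.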
Beyond the business case: why Washington is paying attention
Both Anthropic and OpenAI have been careful to frame this as more than lost revenue. Models built through unauthorized distillation, they argue, tend to shed the safety guardrails US labs spend heavily to build in: limits on things like bioweapon synthesis instructions or large-scale cyberattack assistance. Feed those stripped-down models into military or intelligence systems, and the problem compounds fast.

US officials, speaking to Bloomberg on condition of anonymity, estimated that unauthorized distillation costs Silicon Valley labs billions in annual profit. The Trump administration’s AI Action Plan has called for a formal information-sharing center partly to address this, suggesting at least some appetite in Washington to put structure around what is currently a quiet, informal arrangement between rivals.
The distillation fight is really about who controls the next decade of AI
The information-sharing effort is a start, but it papers over a structural problem. As long as proxy networks can spin up 20,000 fake accounts faster than any one company can shut them down, the math doesn’t work in the defenders’ favor. OpenAI has explicitly called for an “ecosystem security” approach: hardening not just individual labs but API routers, cloud providers, and payment infrastructure simultaneously. One weak link is enough.

Meanwhile, the attacks are getting harder to spot. Chinese actors have moved well past simple output scraping into multi-stage pipelines that blend synthetic data generation with reinforcement learning, essentially automating the process of building a rival model on someone else’s foundation. Anthropic caught MiniMax mid-campaign and watched it pivot within 24 hours of a new Claude release, redirecting traffic to capture capabilities from the latest version. That kind of operational speed is difficult to counter without coordination that currently doesn’t exist at scale.
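None of the three companies has described its detection pipeline, but the shape of the problem follows from Anthropic’s numbers. As a purely hypothetical illustration of one signal defenders might look for, the sketch below flags pairs of accounts whose prompt sets overlap heavily; the `fingerprint` function, the threshold, and the brute-force pairwise scan are all assumptions chosen for readability, and a real system would need something like locality-sensitive hashing to handle the volume.

```python
import hashlib
from collections import defaultdict
from itertools import combinations

def fingerprint(prompt: str) -> str:
    # Hypothetical normalization: in practice this might be a minhash or an
    # embedding bucket, so that paraphrased prompts still collide.
    return hashlib.sha256(prompt.strip().lower().encode()).hexdigest()[:16]

def flag_coordinated_accounts(requests, min_shared=50):
    """Flag account pairs that submit many near-identical prompts.

    `requests` is an iterable of (account_id, prompt) pairs. Accounts in a
    coordinated extraction campaign tend to draw from one shared prompt pool.
    """
    prompts_by_account = defaultdict(set)
    for account_id, prompt in requests:
        prompts_by_account[account_id].add(fingerprint(prompt))

    flagged = set()
    for a, b in combinations(prompts_by_account, 2):
        if len(prompts_by_account[a] & prompts_by_account[b]) >= min_shared:
            flagged.update((a, b))
    return flagged
```

The mixing tactic described above, diluting extraction prompts with ordinary traffic across 20,000 accounts, is aimed at exactly this kind of per-company heuristic, which is part of why the labs have concluded that pooling signals beats defending alone.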
