Chinese AI Firms Allegedly Used Anthropic’s Claude to Train Their Own Models
4 hours ago

In a major development that raises serious concerns about AI ethics, platform security, and global technological competition, U.S.-based AI company Anthropic has alleged that several Chinese artificial intelligence firms used its chatbot Claude to improve their own AI systems.

According to the company, Chinese AI startups including DeepSeek, Moonshot AI, and MiniMax generated more than 16 million interactions with Claude using approximately 24,000 fake accounts. Anthropic claims this activity violated its terms of service and regional access policies.

What Happened?

Anthropic reported that the firms allegedly used large-scale automated interactions with Claude to extract high-quality responses. These responses were then used to train their own AI models through a technique known as model distillation.

The company stated that the accounts involved were created using fabricated identities and were designed to bypass geographic restrictions, allowing access from regions where the service may not be officially available.

This level of coordinated activity, according to Anthropic, represents one of the largest known attempts to systematically harvest outputs from a commercial AI system.

Understanding Model Distillation

Distillation is a widely used machine learning technique in which a smaller or less capable model learns from the outputs of a more advanced model.

Here’s how it works:

  • A powerful model (the “teacher”) generates high-quality answers.
  • These answers are collected at scale.
  • A smaller model (the “student”) is trained to mimic the teacher’s behavior.
  • The result is a cheaper, faster model that retains many of the original capabilities.
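The four steps above can be sketched with a deliberately tiny toy example. This is an illustrative stand-in only, not Anthropic's or any named firm's actual pipeline: the "teacher" is an exact function, and the "student" is a linear model fit by ordinary least squares to the teacher's collected outputs.

```python
import random

# Toy sketch of model distillation (hypothetical example).
# The "teacher": a capable model, stood in for here by an exact function.
def teacher(x):
    return 2.0 * x + 1.0

# Steps 1-2: query the teacher at scale and collect its outputs.
random.seed(0)
prompts = [random.uniform(-10, 10) for _ in range(1000)]
dataset = [(x, teacher(x)) for x in prompts]

# Step 3: fit a "student" to mimic the teacher's outputs (least squares).
n = len(dataset)
sx = sum(x for x, _ in dataset)
sy = sum(y for _, y in dataset)
sxx = sum(x * x for x, _ in dataset)
sxy = sum(x * y for x, y in dataset)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

def student(x):
    return slope * x + intercept

# Step 4: the student now reproduces the teacher's behaviour cheaply,
# without ever seeing the teacher's original "training data".
```

The key point the sketch captures is that the student never touches the teacher's internals or training data; the teacher's outputs alone are enough to transfer the behaviour.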


While distillation itself is a legitimate research method, using another company’s proprietary AI outputs at scale—especially by bypassing access controls—raises legal and ethical concerns.

Why This Matters

1. Intellectual Property Concerns

AI companies invest billions of dollars in research, infrastructure, and training data. If competitors can replicate capabilities simply by extracting outputs, it undermines the business model of leading AI developers.

2. Platform Security Risks

The use of 24,000 fake accounts highlights the growing challenge of preventing automated misuse, data scraping, and large-scale exploitation of AI platforms.

3. Global AI Competition

The incident reflects the intensifying race between U.S. and Chinese AI companies. As nations compete for leadership in artificial intelligence, concerns around technology transfer, misuse, and regulatory enforcement are becoming more prominent.

4. National Security Implications

Anthropic warned that if models trained through such methods are open-sourced, their capabilities could spread globally without oversight. This could potentially amplify risks related to misinformation, cyber threats, or other misuse scenarios.

The Companies Named

The firms mentioned in the report are among China’s emerging AI players:

  • DeepSeek – Known for developing competitive large language models.
  • Moonshot AI – Focused on long-context AI systems.
  • MiniMax – Developing conversational AI and multimodal models.

None of the companies had publicly responded to Anthropic's claims at the time of reporting.

The Bigger Picture: The AI Data War

This incident highlights a growing challenge in the AI industry: output scraping and synthetic data harvesting.

As leading models become more capable, their outputs themselves become valuable training data. This creates a new type of competition where companies may attempt to replicate capabilities without direct access to original training datasets.

To counter such risks, AI companies are increasingly investing in:

  • Stronger rate limits and monitoring
  • Identity verification systems
  • Geographic access controls
  • Watermarking and output tracking technologies
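The first of those defenses, rate limiting, is commonly built on a token-bucket scheme. The sketch below is a minimal, assumed implementation for illustration; the class and parameter names are the author's inventions, not any provider's actual API.

```python
import time

# Minimal token-bucket rate limiter (illustrative sketch only).
class TokenBucket:
    def __init__(self, capacity, refill_per_sec):
        self.capacity = capacity            # maximum burst size
        self.tokens = float(capacity)       # bucket starts full
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(
            self.capacity,
            self.tokens + (now - self.last) * self.refill_per_sec,
        )
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# A burst of 10 rapid requests against a bucket of 5: the first 5 pass,
# the rest are rejected until the bucket refills.
bucket = TokenBucket(capacity=5, refill_per_sec=1.0)
results = [bucket.allow() for _ in range(10)]
```

A real deployment would layer this per account and per IP range, which is exactly why thousands of fake accounts are attractive to an attacker: each fresh identity gets a fresh bucket.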

What This Means for the Future

The case highlights a major shift in the AI landscape — the race is no longer only about building powerful models, but also about protecting AI capabilities and intellectual property. As artificial intelligence becomes a strategic technology, companies are expected to implement stronger security measures, including tighter user verification, advanced monitoring, and stricter access controls to prevent large-scale misuse.

At the same time, the industry may see increased legal action against the unauthorized use of AI outputs, as organizations move to protect their technology and data. Governments are also likely to introduce greater oversight on cross-border AI access and technology transfer, recognizing AI as a critical national asset.

Another important shift is the growing debate over open-source AI versus controlled access. While openness encourages innovation, tighter control is increasingly seen as necessary to prevent misuse and the uncontrolled spread of capabilities.

The allegations by Anthropic highlight a new frontier in the AI competition—where the battle isn’t just about innovation, but also about safeguarding the intelligence behind it.

Follow Karostartup for more insights into the intersection of technology, policy, and the future of India.
