
Big Tech keeps Anthropic’s Claude on the menu—despite a Pentagon blacklist

A team of engineers works on AI projects using Anthropic models, © TechCorp

Microsoft, Google, and Amazon are telling customers they can keep using Anthropic’s Claude AI—so long as the work isn’t for the U.S. Defense Department.

The reassurance comes after the Pentagon slapped Anthropic with a rare “supply chain risk” designation, a label typically aimed at foreign adversaries, not a U.S. startup. The move raised alarms across the AI industry. But the three cloud giants say the Pentagon’s action doesn’t block Claude from commercial use on their platforms.

In plain terms: Claude is still available for most businesses. It’s just effectively off-limits for certain Defense Department-related projects.

Why the Pentagon’s “supply chain risk” label matters

The Defense Department’s “supply chain risk” designation is a serious warning flag inside federal procurement. It can restrict how government agencies buy or use a company’s technology, especially in sensitive environments.

What makes this case unusual is who got tagged. Anthropic is an American AI company, best known for Claude, a large language model that competes with OpenAI’s ChatGPT and Google’s Gemini. According to reporting cited by multiple outlets, the designation followed Anthropic’s refusal to provide what the Pentagon viewed as sufficiently broad access to its technology for certain uses the company considered insecure.

The result is a split-screen reality: heightened caution—and limits—inside defense work, while the commercial AI market keeps moving.

Microsoft: Claude stays for non-defense customers

Microsoft was among the first to publicly reassure customers, saying Anthropic models—including Claude—will remain available in Microsoft products for clients not tied to Defense Department work.

Microsoft’s position rests on legal and compliance reviews concluding that the Pentagon’s designation doesn’t prohibit commercial collaboration. The company also has major financial incentives: it has been linked to a potential $5 billion investment in Anthropic.

For businesses building customer service bots, internal knowledge tools, or workflow automation, Microsoft’s message is simple: your Claude-based projects shouldn’t be disrupted.

Google: Partnership intact on Google Cloud

Google, which has invested roughly $3 billion in Anthropic, also says Claude remains available for commercial projects through Google Cloud.

That matters because Google sells AI infrastructure to advertisers, developers, and enterprise customers who want ready-to-deploy models alongside cloud computing. Cutting off access would create immediate headaches for teams that have already built products around Anthropic’s tools.

Instead, Google is signaling continuity: the Pentagon’s designation may reshape defense-related work, but it won’t derail Claude’s role in the broader cloud market.

Amazon: AWS customers can keep using Claude—except for Defense work

Amazon, the dominant player in public cloud through AWS, has likewise reaffirmed that Claude will remain available to AWS customers, excluding projects tied to the Defense Department.

Amazon’s stake in Anthropic is even larger, reported at about $8 billion. AWS has positioned itself as Anthropic’s primary cloud partner, meaning Claude is deeply woven into Amazon’s AI strategy and product lineup.

By drawing a bright line around Defense Department work while keeping commercial access open, Amazon is trying to protect both its government relationships and its AI business momentum.

What this means for the AI industry

The episode highlights a growing tension in American tech: Washington wants tighter control over AI systems that could touch national security, while the private sector is racing to deploy the same tools across the economy.

It also sends a signal to startups and investors. Even if a company isn’t accused of wrongdoing, a federal risk designation can still ripple through partnerships, procurement, and reputation—forcing cloud providers and customers to rethink compliance and contingency plans.

For now, Microsoft, Google, and Amazon are betting that the commercial demand for Claude outweighs the political and regulatory turbulence—and that the Pentagon’s concerns can be contained to defense-specific lanes without freezing the broader AI market.

Key Takeaways

  • Anthropic’s Claude remains available for non-defense projects despite the Pentagon’s designation.
  • Microsoft, Google, and Amazon are actively supporting their partnership with Anthropic.
  • The situation highlights tensions between national security and commercial interests.

Frequently Asked Questions

What is the supply chain risk designation?

It is a classification used by the Pentagon to identify companies considered potential national security risks, typically applied to foreign adversaries.

Why do Microsoft, Google, and Amazon continue to work with Anthropic?

These companies believe the designation does not prevent them from collaborating with Anthropic on non-defense projects, given the strategic importance of AI in business.

