Microsoft seeks court action to protect $5B Anthropic investment

Microsoft is asking a US court to block the Pentagon’s decision to temporarily classify the artificial intelligence company Anthropic as a supply-chain risk. The tech behemoth says such a move could disrupt the military’s access to advanced AI systems and risk billions of dollars invested in private companies.
Microsoft recently said it would invest up to $5 billion in Anthropic, and the company says a court order is needed to prevent immediate harm to contracts and technology already in use by the government.
Although the United States Department of Defense said it needs to defend its systems and operations, companies building AI tools are warning that abrupt restrictions could undermine partnerships and jeopardize America’s leadership in technology.
Microsoft has filed a motion in the United States District Court for the Northern District of California, seeking a temporary restraining order that would prevent the Pentagon from applying its ban to Anthropic’s technology in existing defense contracts.
In the filing, Microsoft said such an order would allow time for an orderly transition and avoid disruption to the military’s continued use of artificial intelligence tools. Without it, Microsoft warned, companies operating on behalf of the Pentagon could be forced to rapidly replace products and renegotiate contract terms that now depend on Anthropic’s AI models.
This shift, the company said, could have ramifications for the Defense Department’s operations. “This may potentially disrupt US warfighters at a crucial moment,” Microsoft said in the filing. Microsoft submitted the request as an amicus brief, meaning it is not a direct party to the case.
However, the court’s ruling could have a “material impact” on its business and the broader industry, according to the company, and its sizable investment in Anthropic also informs its participation.
Microsoft announced in November that it plans to invest up to $5 billion in Anthropic, one of the fastest-growing artificial intelligence firms in the United States. Microsoft is also a major investor in OpenAI, a rival AI developer.
Pentagon labels Anthropic a supply-chain risk
The controversy erupted last week when the Pentagon formally barred Anthropic’s technology from defense contracts and designated the company a supply-chain risk.
That label has traditionally been reserved for companies tied to foreign adversaries. Under the order, contractors working with the Defense Department must certify that Anthropic’s AI models are not used in systems or services linked to Pentagon work.
Anthropic quickly sued the department over the decision, alleging that the designation was both unprecedented and unlawful. The company said the move could significantly damage its business and threaten contracts worth hundreds of millions of dollars.
The dispute centers on Anthropic’s family of AI models, Claude. The company had been negotiating with the Pentagon over how the technology could be used, but the talks broke down. Anthropic wanted assurances that its systems would not be used to build fully autonomous weapons or to conduct mass domestic surveillance.
As the situation unfolds in the US, Anthropic is planning to open a new office in Sydney in the coming weeks as it expands its presence in Australia and New Zealand. According to the company’s Economic Index, the two countries rank fourth and eighth globally in per capita Claude.ai usage. The Sydney office will become Anthropic’s fourth hub in the Asia-Pacific region.
Tech workers and AI researchers back Anthropic
The controversy has also drawn support for Anthropic from across the artificial intelligence community. More than 30 employees from OpenAI and Google DeepMind filed a statement supporting Anthropic’s lawsuit. Among the signatories was DeepMind’s chief scientist, Jeff Dean.
In the court filing, the researchers argued that the government’s designation was an arbitrary use of power that could harm the broader AI industry.
They noted that if the Pentagon was dissatisfied with its contract with Anthropic, it could simply have ended the agreement and chosen another provider instead of labeling the company a supply-chain threat.
The employees also warned that the move could undermine US competitiveness in artificial intelligence by discouraging open discussion of the technology’s risks and limits.
Shortly after the Pentagon announced the designation, the Defense Department signed a deal with OpenAI, a development that some OpenAI employees reportedly protested.