The Trump administration appeals a ruling that blocked the Pentagon from acting against Anthropic in a dispute over military AI use.
  • Elena
  • April 02, 2026

The administration of Donald Trump has moved to appeal a recent court ruling that blocked federal action against AI company Anthropic, intensifying an ongoing legal battle over the use of artificial intelligence in military operations.

Attorneys from the U.S. Department of Justice filed a notice in a San Francisco federal court confirming their intent to challenge the decision issued by U.S. District Judge Rita Lin. The ruling had temporarily prevented the Pentagon from labeling Anthropic as a national security supply chain risk and from enforcing a directive that barred federal agencies from using the company’s AI products, including its chatbot Claude.

Judge Lin criticized the government’s actions, stating that the broad measures taken against Anthropic appeared arbitrary and excessive. She warned that such steps could seriously damage the company’s operations and questioned the use of a rarely invoked military authority, which has historically been applied to foreign adversaries. In her remarks, she emphasized that existing laws do not support treating a domestic company as a national threat simply for disagreeing with government policies.

The Pentagon, led by Defense Secretary Pete Hegseth, strongly opposed the ruling. Emil Michael, a senior defense official and the Pentagon’s chief technology officer, publicly criticized the decision, arguing that it could limit the military’s flexibility in choosing technology partners and conducting operations effectively.

Judge Lin temporarily paused her order for a week, allowing the government time to file an appeal with the Ninth Circuit Court of Appeals. She also clarified that her ruling does not force the Pentagon to use Anthropic’s technology, nor does it prevent the department from switching to other AI providers.

The dispute stems from failed negotiations between Anthropic and the Pentagon over a defense contract. The company had insisted on maintaining safeguards to prevent its AI systems from being used in fully autonomous weapons or for domestic surveillance. However, the Pentagon argued that it should retain the authority to deploy such technology in any lawful manner.

In addition to this case, Anthropic has filed a separate legal challenge in a Washington, D.C. appeals court over another government effort to designate it as a supply chain risk under a different law.

The case has drawn significant attention across the tech and defense sectors. Several third parties, including Microsoft, industry organizations, technology workers, retired military leaders, and even religious groups, have submitted legal briefs supporting Anthropic’s position, highlighting broader concerns about the future of AI governance and the balance between national security and corporate rights.