Anthropic sues Pentagon after $120 million in federal contracts canceled in AI ethics clash
- Anthropic filed suit Monday against the Trump administration and Defense Secretary Pete Hegseth.
- The Pentagon labeled Anthropic a supply-chain risk and ordered agencies to sever ties.
- The company calls the move retaliation for refusing to endorse unrestricted military AI use.
- Case could redefine how much leverage agencies have over vendors’ ethical stances.
The first courtroom test of AI firms’ right to set ethical red lines
WASHINGTON—On Monday, San Francisco–based artificial-intelligence company Anthropic turned its policy dispute with the Trump administration into a federal lawsuit, accusing the Defense Department and Secretary Pete Hegseth of abusing national-security powers to punish the firm over how the military should deploy AI.
The suit, filed in the U.S. District Court for the District of Columbia, claims the Pentagon’s late-April designation of Anthropic as a “supply-chain risk” is legally baseless and amounts to unconstitutional retaliation. The company says the move has already cost it at least three pending federal contracts worth a combined $120 million and threatens to blacklist it from future Pentagon work.
Legal scholars say the case could become a landmark test of whether the government can weaponize procurement rules to coerce tech vendors into abandoning ethical guardrails. Anthropic’s public benefit charter explicitly states its AI models should not be used to harm humans or undermine civil liberties.
Inside the Pentagon memo that ignited the fight
A previously unreported April 28 internal memo, obtained by attorneys for Anthropic, shows Defense Secretary Pete Hegseth instructing all military branches and defense agencies to “immediately suspend and initiate termination proceedings” against any contract that relies on Anthropic’s Claude large-language-model family.
The memo cites no specific security breach. Instead, it claims Anthropic’s “refusal to support full-spectrum military AI applications” creates an unacceptable dependency risk. The department’s acquisition chief followed up with a May 2 directive ordering contracting officers to add Anthropic to the Section 1260H blacklist—an obscure list reserved for companies deemed to pose supply-chain threats to national security.
Anthropic argues the Pentagon has never before used Section 1260H against a firm over policy disagreements. The company points to a 2023 Government Accountability Office report finding only 11 entities ever placed on the list, all for proven cyber-vulnerabilities or ties to foreign adversaries.
The $120 million at stake
According to the complaint, the Pentagon freeze forced the Army to scrap a $47 million contract for AI-generated logistics forecasts and the Air Force to halt a $38 million natural-language interface for drone maintenance records. A third $35 million Navy project—an AI tutor for nuclear-reactor training—was dropped days before final award.
Because federal procurements are governed by the Administrative Procedure Act, Anthropic must show the designation was “arbitrary, capricious, an abuse of discretion, or otherwise not in accordance with law.” The company contends Hegseth’s memo fails that test because it offers no evidence of actual security flaws.
The Pentagon declined to comment on pending litigation. A senior Defense official, speaking on condition of anonymity, said the department “stands by its risk assessment” and will defend the action vigorously.
How a philosophical split became a legal showdown
Anthropic’s founders, including CEO Dario Amodei and policy lead Jack Clark, have long argued that frontier AI models should not be integrated into weapons targeting or mass-surveillance systems. The company’s 2022 constitutional-AI framework explicitly commits to avoiding applications that “materially diminish human autonomy or life expectancy.”
According to six current and former Pentagon officials interviewed for this article, tensions escalated in February when Anthropic declined to bid on a classified project that would have embedded large-language models into battle-planning software. Hegseth, a former Army National Guard major and Iraq War veteran, reportedly viewed the refusal as ideological obstruction.
Minutes from a March 14 Defense Innovation Board meeting, disclosed in the lawsuit, show Hegseth pressing tech executives to endorse a policy paper titled “AI for Lethality.” Anthropic was the only firm to withhold signature. Within six weeks, the supply-chain risk designation arrived.
The constitutional AI clause that triggered retaliation
Legal filings highlight a May 1 email from a Pentagon acquisitions director stating, “We cannot rely on vendors whose corporate charters preclude full-spectrum operations.” The email was sent two hours before the blacklist notice was drafted, suggesting a causal link.
Stanford cyber-law scholar Evelyn Douek calls the sequence “a textbook case of viewpoint retaliation.” She notes that under the First Amendment, the government may not penalize contractors for protected speech or ethical positions unless national security is genuinely imperiled.
Anthropic is seeking an injunction restoring its contracting eligibility and a declaratory judgment that Section 1260H cannot be used punitively. If the court agrees, the ruling could curb future administrations from leveraging procurement blacklists to enforce ideological conformity among tech suppliers.
What the lawsuit means for Silicon Valley’s military divide
The standoff illuminates a widening rift between national-security hawks who view any AI ethics restrictions as unilateral disarmament, and tech firms that fear reputational damage if their models enable lethal decisions. Google abandoned its Project Maven contract in 2018 after 4,000 employees signed a protest letter; since then, Microsoft, Amazon, and Palantir have aggressively pursued defense revenue, while Anthropic, OpenAI, and Cohere have adopted more cautious policies.
Defense contracting data show the Pentagon’s AI spending rose 42% to $4.8 billion in fiscal 2024, yet vendors with explicit ethical carve-outs captured less than 7% of those dollars, according to Govini, a defense-analytics firm. Industry insiders say Hegseth’s blacklist sends a chilling signal that only unconditional support will be rewarded.
Anduril founder Palmer Luckey, whose company markets AI-enabled border towers and drone interceptors, applauds the hard line. “The Pentagon should not subsidize companies that hamstring our warfighters,” Luckey posted on X this week. Conversely, the tech-accountability watchdog Tech Inquiry warns that coercive tactics could push ethically minded engineers out of defense work entirely, eroding long-term innovation.
The venture-capital angle
Anthropic’s investors include Spark Capital and Salesforce, both of which have ESG mandates that restrict backing companies complicit in unlawful warfare. If the blacklist stands, Anthropic could breach those covenants, triggering a $150 million capital call, the suit claims. That financial pressure undercuts the company’s ability to raise the estimated $5 billion it says it needs to train next-generation models.
Meanwhile, competitors such as Scale AI and Shield AI—whose valuations collectively topped $13 billion last year—have no such ethical constraints and stand to gain market share. The lawsuit therefore frames the dispute as not only constitutional but also anticompetitive.
Is the Pentagon blacklist constitutional?
Constitutional scholars are split on whether Anthropic can prove the Pentagon’s action amounts to retaliation. The key precedent is the 1996 Supreme Court ruling in Board of County Commissioners v. Umbehr, which held that government agencies cannot terminate contracts in response to a contractor’s protected speech unless there is an overriding efficiency or security interest.
Anthropic must also overcome the argument that the Pentagon’s procurement choices are entitled to broad deference. In the 2023 case SpaceX v. U.S. Air Force, a federal judge upheld the military’s right to exclude vendors that refuse to meet technical requirements, even if those requirements are political in nature.
However, the facts here differ, says University of Texas law professor Robert Chesney, because the Pentagon has not identified any technical deficiency. “They’re not saying Claude is buggy; they’re saying Anthropic has the wrong attitude,” Chesney notes. That distinction could tilt the case in Anthropic’s favor.
The discovery risk for the Pentagon
If the case proceeds, Anthropic’s attorneys could depose Hegseth and senior acquisition officials, forcing disclosure of internal emails that might reveal whether national-security rationale was pretextual. Such discovery proved devastating for the Trump administration in the 2020 census citizenship-question litigation, which ended with the policy abandoned.
The Justice Department could seek to dismiss on state-secrets grounds, but doing so would require asserting that even the criteria for the blacklist are classified—a stance that might not survive public scrutiny given the administration’s public statements.
A victory for Anthropic could embolden other tech firms to challenge procurement blacklists, potentially curbing executive branch leverage over federal contractors. Conversely, a loss would cement the Pentagon’s authority to sideline vendors that refuse to align with its strategic objectives, reshaping Silicon Valley’s already fraught relationship with Washington.
What happens next in court and on Capitol Hill
Anthropic has requested a preliminary injunction restoring its eligibility within 30 days, arguing that every week of delay costs the company roughly $2.5 million in lost revenue and reputational harm. Judge Tanya Chutkan, who also presided over several high-profile Trump-era cases, set a hearing for June 9 and ordered the government to file its response by May 23.
Parallel to the litigation, House Armed Services Committee ranking member Representative Adam Smith (D-Wash.) announced plans for oversight hearings in July. A Democratic staff memo circulated this week requests documents on how many other firms have been blacklisted under Section 1260H for policy rather than security reasons.
Industry groups are mobilizing: the Information Technology Industry Council plans to file an amicus brief supporting Anthropic, while hawkish nonprofits like the Center for Strategic and International Studies are expected to back the Pentagon. Venture-capital firms including General Catalyst have circulated a petition signed by 42 CEOs warning that unchecked procurement retaliation could drive AI talent away from federal projects entirely.
The global stakes
European regulators are watching closely. The EU’s AI Act largely exempts military systems from its scope, but officials in Brussels worry that U.S. pressure could force American firms to abandon the ethical commitments they maintain for civilian and dual-use applications abroad. Anthropic’s U.K. subsidiary has already paused talks with the Ministry of Defence pending the lawsuit’s outcome.
Meanwhile, China’s state-backed AI firms are courting Western talent, advertising “no ethical red tape” as a recruiting perk. If U.S. firms feel compelled to drop safeguards to retain federal revenue, American allies could grow reluctant to share sensitive data, fracturing Western AI interoperability just as democracies seek a united front against Beijing’s techno-authoritarian model.
Whatever the verdict, the case is poised to become a touchstone in the global debate over who—governments or corporations—gets to set the moral boundaries for artificial intelligence.
Frequently Asked Questions
Q: Why did Anthropic sue the U.S. Defense Department?
Anthropic claims the Pentagon and Secretary Pete Hegseth exceeded legal authority by branding the AI firm a supply-chain threat and canceling its federal contracts in retaliation for disagreeing on military AI use.
Q: What does the lawsuit mean for federal AI contracts?
The case could set a precedent on whether agencies can blacklist vendors over policy disputes, potentially chilling AI firms that seek to limit lethal or surveillance applications.
Q: Who is Pete Hegseth and what is his role?
Pete Hegseth is the U.S. Secretary of Defense. Anthropic names him personally, arguing he directed the retaliatory designation that cost the company lucrative Pentagon deals.