Anthropic wins support from rival AI researchers in legal battle over Pentagon blacklist

AI researchers from OpenAI and Google DeepMind back Anthropic’s legal challenge against a Pentagon decision that labelled the company a supply-chain risk.

Anthropic has received backing from researchers at rival artificial intelligence companies in its legal challenge against the US government, after the Pentagon designated the company a “supply-chain risk” and cancelled its defence contracts.

More than 30 employees from OpenAI and Google DeepMind — including Google chief scientist Jeff Dean — have filed an amicus brief supporting Anthropic’s lawsuit, warning that the government’s decision could harm the wider US AI sector.

The researchers said the move risks undermining the country’s competitiveness in a strategically critical technology field.

“This effort to punish one of the leading US AI companies will undoubtedly have consequences for the United States’ industrial and scientific competitiveness in artificial intelligence and beyond,” the employees said in the court filing, according to Fortune.

Industry researchers rally behind Anthropic

The intervention by researchers from rival companies represents an unusual moment of solidarity in the fiercely competitive AI industry.

While the employees signed the brief in their personal capacity, the filing signals broader unease among technology workers about how artificial intelligence systems could be deployed in military applications.

The legal brief was submitted shortly after Anthropic filed two lawsuits challenging the Pentagon’s decision to place the company on a supply-chain risk list — a designation historically applied to foreign companies considered potential threats to US military systems.

The designation effectively blocks the company from certain defence contracts and government partnerships.

Anthropic argues the decision is unjustified and risks damaging both the company and the broader US AI ecosystem.

Dispute triggered by military contract negotiations

The dispute between Anthropic and the US government escalated after negotiations over a revised defence contract collapsed.

According to Fortune, the discussions centred on how Anthropic’s AI model, Claude, could be used by the US military.

Anthropic sought to impose two specific restrictions — preventing the technology from being used for domestic mass surveillance and for fully autonomous lethal weapons.

The Pentagon, however, insisted that the military should be allowed to deploy the company’s systems for “all lawful use”.

Anthropic refused to agree to the language. Shortly afterwards, the administration cancelled its contracts and designated the company a national security risk.

The decision triggered the legal challenge now moving through the US courts.

Rivalry and tension across the AI industry

The dispute has also exposed growing tensions between major AI companies over defence partnerships and ethical limits on military applications.

Shortly after Anthropic’s negotiations with the Pentagon collapsed, OpenAI secured its own contract with the US Department of Defense, reportedly accepting terms Anthropic had rejected.

The developments led to a public exchange between the companies’ leaders.

Anthropic chief executive Dario Amodei criticised the deal, calling OpenAI’s approach “safety theatre” and accusing its leadership of misrepresenting the debate around military AI.

OpenAI chief executive Sam Altman responded indirectly, saying it was “bad for society” when companies abandon democratic norms because they disagree with political leadership.

The exchange highlighted a growing divide within the industry over how closely AI companies should collaborate with defence agencies.

Employee activism adds pressure

The amicus brief also reflects a wider wave of employee activism within the technology sector around the military use of AI.

Nearly 900 employees at Google and OpenAI recently signed an open letter urging their companies to reject government requests to deploy AI systems for domestic surveillance or autonomous lethal targeting — the same restrictions Anthropic sought in its contract negotiations.

At least one senior OpenAI employee has already resigned over the issue.

Caitlin Kalinowski, who led hardware and robotics at the company, stepped down after the Pentagon deal was announced. She said the possibility of domestic surveillance without judicial oversight and autonomous weapons without human authorisation required deeper debate.

The controversy echoes earlier disputes within the technology sector. In 2018, employee protests at Google led the company to withdraw from Project Maven, a US military programme using AI to analyse drone surveillance imagery, according to Reuters.

Broader implications for AI governance

Anthropic’s legal challenge could have significant implications for how AI companies interact with governments and defence agencies.

As artificial intelligence becomes increasingly central to national security, policymakers and technology firms are grappling with questions about oversight, ethical limits and commercial competition.

Industry observers say the case may shape how the US government regulates AI partnerships and how companies define acceptable uses of their technology.

For now, the lawsuit has opened an unusual chapter in the AI industry: rival researchers joining forces to defend a competitor in a dispute that could reshape the relationship between Silicon Valley and the Pentagon.