Anthropic-Pentagon Clash Highlights Big Tech's Changing Stance on AI and Warfare
A growing dispute between Anthropic and the U.S. Department of Defense (DoD) is forcing the technology industry to confront an old but increasingly urgent question: how far should artificial intelligence companies go when their technologies are used for military purposes?
The conflict illustrates a dramatic shift in Silicon Valley’s approach to defence partnerships. Less than a decade ago, cooperation with the military sparked employee protests across major tech companies. Today, many of those same companies are signing lucrative defence contracts and integrating AI tools into government systems.
The standoff is now emerging as a flashpoint for just how far those attitudes have shifted.
Anthropic Challenges Pentagon Blacklisting
Tensions escalated recently when Anthropic filed a lawsuit against the Department of Defense, arguing that the government violated its First Amendment rights by blacklisting the company from federal work.
The legal battle follows months of disagreement over how the military may use Anthropic’s AI models. The company has sought to prevent its technology from being deployed in domestic mass surveillance programmes or fully autonomous lethal weapons systems.
According to Anthropic, the Pentagon pushed for broader access to the company’s AI systems under a policy that would permit “any lawful use” of the technology. The firm resisted that demand, warning that removing key safety restrictions could open the door to misuse.
Anthropic has framed its stance as a commitment to the ethical guardrails that guided the company’s founding, effectively drawing a boundary that other AI developers must now decide whether to respect.
Yet the dispute also reveals how blurred the ethical lines around military AI have become across the broader technology industry.
“If people are looking for good guys and bad guys, where a good guy is someone who doesn’t support war,” said Margaret Mitchell, AI researcher and chief ethics scientist at Hugging Face, “then they’re not going to find that here.”
From Employee Protests to Defence Partnerships
The tech sector’s relationship with the military has shifted significantly over the past decade.
In 2018, thousands of employees at Google protested the company’s involvement in Project Maven, a U.S. Department of Defense programme that used machine learning to analyse drone surveillance footage.
More than 3,000 workers signed an open letter at the time declaring:
“We believe that Google should not be in the business of war.”
The backlash was so intense that Google ultimately chose not to renew its participation in Project Maven. The company also introduced ethical guidelines stating that it would not pursue technologies designed to cause or directly facilitate injury to people.
In recent years, Google has revised its policies, removing language that explicitly prohibited building technology for weapons applications. The company has also signed multiple contracts allowing military organisations to use its AI tools.
Employee activism has also faced stricter controls. In 2024, Google dismissed more than 50 staff members who protested the company’s ties to the Israeli government. Following the firings, CEO Sundar Pichai reminded employees that Google was a business rather than a platform for debating political issues.
The company has since expanded its military engagement. This week, Google announced that its Gemini AI platform would be made available to the U.S. military for developing AI agents used in unclassified government projects.
OpenAI and Other AI Firms Join the Defence Ecosystem
Google is far from the only AI company moving closer to the defence establishment.
OpenAI, which previously prohibited military use of its models, softened its stance in 2024. The company has since deepened its engagement with the U.S. military, including the appointment of its chief product officer as a lieutenant colonel in the military’s “Executive Innovation Corps.”
OpenAI also joined Google, Anthropic, and xAI in securing contracts worth up to $200 million each from the Department of Defense to integrate artificial intelligence into military systems.
In a twist highlighting the industry’s shifting alliances, OpenAI secured a new DoD contract for classified systems on the same day that Defense Secretary Pete Hegseth declared Anthropic a supply chain risk.
Meanwhile, companies that openly prioritise defence partnerships, such as Anduril and Palantir, have grown increasingly influential in Silicon Valley’s policy debates.
Palantir, which has long worked with military intelligence agencies, gained early prominence by helping analyse data used to identify roadside bombs in Afghanistan during the early 2010s.
Its CEO, Alex Karp, has been outspoken about the need for deeper collaboration between technology firms and the military. In a book published last year, Karp criticised Google employees who protested Project Maven, describing their opposition as misguided.
After Google exited the programme, Palantir took over Project Maven, which has since evolved into a classified intelligence system used by military personnel. According to reporting by The Washington Post, that system now provides access to Anthropic’s Claude AI model.
Anthropic’s Position: Cooperation With Limits
Despite the public nature of its conflict with the Pentagon, Anthropic’s leadership insists that the company still supports many forms of military collaboration.
In a recent blog post, Dario Amodei, Anthropic’s co-founder and CEO, suggested that the company and the U.S. government share many strategic goals.
“Anthropic has much more in common with the Department of War than we have differences,” Amodei wrote.
His views reflect a broader philosophy that attempts to balance national security interests with safeguards against misuse.
In an essay published earlier this year, Amodei warned about several potential dangers associated with advanced AI, including the risk of bioweapon development and concerns about adversarial governments, particularly China, gaining technological advantages.
At the same time, he argued that democratic nations should be equipped with cutting-edge AI tools to counter authoritarian rivals.
The greater concern, he suggested, is not simply the use of AI in warfare but the concentration of power among a small group of decision-makers controlling autonomous systems.
Where Anthropic Draws the Line
Anthropic’s primary objection appears to focus on two specific applications: mass domestic surveillance and fully autonomous lethal weapons.
Outside those areas, the company has signalled broad willingness to cooperate with the U.S. military.
According to court filings in Anthropic’s lawsuit, the company has already created a specialised version of its AI model known as Claude Gov that is tailored for government use.
“Anthropic does not impose the same restrictions on the military’s use of Claude as it does on civilian customers,” the lawsuit states.
The system is designed to be less restrictive in handling sensitive requests, including tasks related to classified materials, military planning, and threat analysis.
Reports indicate that the U.S. government has used Claude for target selection and analytical support in operations related to its bombing campaign against Iran, an application Anthropic has not publicly objected to.
Amodei has insisted, however, that Anthropic’s technology is not involved in direct operational decision-making.
“We have said to the Department of War that we are OK with all use cases,” Amodei told CBS News, “basically 98 or 99% of the use cases they want to do, except for two.”
The Bigger Debate Over AI and Military Power
The clash between Anthropic and the Pentagon reflects a deeper transformation underway in the tech industry.
Many Silicon Valley engineers once viewed military collaboration as ethically unacceptable. Geopolitical competition, rising defence budgets, and the commercial potential of government contracts have since reshaped the landscape.
Today, artificial intelligence is increasingly seen not just as a commercial technology but as a strategic asset in global power competition.
The question facing the industry is no longer whether AI will play a role in warfare but where companies choose to draw the boundaries of that involvement.