From Operation Absolute Resolve to Absolute Uncertainty: The Battle Between Anthropic and the U.S. Military
March 3, 2026
Jack Zhou
On Friday, President Trump ordered the federal government to stop using Anthropic’s products, and Defense Secretary Pete Hegseth designated the company a “supply chain risk to national security.” This fallout is not the first of its kind, however; it is only the latest episode in the long and often fraught relationship between Silicon Valley and the American military.
A Relationship Defined by Tension
Anthropic is not the first AI company to engage with the military. In 2017, Google began providing AI technology to Project Maven, a Department of Defense initiative to analyze drone footage. The collaboration was poorly received inside the company. Over 4,000 Google employees signed a letter to CEO Sundar Pichai demanding the contract be canceled, stating that “Google should not be in the business of war.” A dozen employees resigned in protest, and Google ultimately declined to renew the contract. In the aftermath, Google also published AI principles pledging not to build technologies for weapons or for surveillance that violates international norms.
However, the Pentagon’s work did not stop with the end of Google’s contract. Microsoft and Amazon quietly picked up Project Maven subcontracts worth roughly $50 million. Companies like Palantir and Anduril stepped in as well. The Pentagon sent a clear message: when one company exits, others fill the void. Former Amazon CEO Jeff Bezos captured the industry’s other perspective succinctly: “If big tech companies are going to turn their back on the U.S. Department of Defense, this country is going to be in trouble.”
Anthropic Enters the Pentagon
When Anthropic first entered the Pentagon, it engaged on its own terms. In July 2024, Anthropic partnered with Palantir to bring Claude into government intelligence and defense operations. A year later, Anthropic signed a $200 million contract with the Department of Defense, one that included a critical caveat within its acceptable use policy: Claude could not be used for mass domestic surveillance or fully autonomous weapons. Anthropic CEO Dario Amodei has since framed this as responsible engagement, stating that “democracies have a legitimate interest in some AI-powered military and geopolitical tools,” but that these should be deployed “carefully and within limits.”
Tensions flared in February 2026 after reports emerged that Claude had been used in Operation Absolute Resolve, in which the U.S. military captured Venezuelan President Nicolás Maduro in January. The operation, which involved strikes across Caracas and resulted in at least 83 deaths, reportedly used Claude through Anthropic’s partnership with Palantir. An Anthropic employee’s inquiry to Palantir about how exactly Claude was used during the operation sparked alarm at the Pentagon, which interpreted the question as Anthropic potentially seeking to veto military operations.
The Pentagon’s Position
The Department of War’s stance is unambiguous: no private company should dictate how the military can use contracted technology. Emil Michael, the Pentagon’s undersecretary for research and engineering, called Amodei “a liar” with a “God complex” who was “putting our nation’s safety at risk.” Pentagon spokesman Sean Parnell set a deadline of 5:01 PM on Friday, February 27, for Anthropic to accept unrestricted use of Claude for “any lawful purpose.” The Pentagon maintained that federal law already prohibits mass surveillance and autonomous weapons, making Anthropic’s conditions redundant. Hegseth declared that “America’s warfighters will never be held hostage by the ideological whims of Big Tech.”
Anthropic’s Position
In a lengthy public statement on Thursday, Amodei drew a clear line. “Anthropic understands that the Department of War, not private companies, makes military decisions,” he wrote. “However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values.” Amodei argued that current AI technology is not reliable enough to power fully autonomous weapons, and that AI-driven mass surveillance poses “serious, novel risks to our fundamental liberties.” He also pointed to the contradiction between the Pentagon’s two messages: “One labels us a security risk; the other labels Claude as essential to national security.” When the deadline passed, Anthropic did not budge. Amodei offered to help with a smooth transition to another provider, saying the company’s preference was to continue serving the military, with safeguards in place.
Another Day, Another AI
Just as after Google left Project Maven, Anthropic was replaced almost immediately once its contract ended. In its place came its biggest rival: hours after Trump ordered the government to stop using Anthropic’s products, OpenAI announced a deal with the Defense Department to deploy its technology on classified networks.
Implications and the Future of AI in the Military
The consequences of this standoff are far-reaching. First, the ban sets a precedent for how the government treats AI companies that push back on military demands. The “supply chain risk” designation—historically reserved for foreign adversaries—has never before been applied to an American company, and it could deter other firms from imposing ethical constraints on government contracts. That is exactly why Anthropic has announced that it will fight the designation in court.
Second, the episode exposes fault lines across the AI industry. While OpenAI moved to fill Anthropic’s role, CEO Sam Altman stated that he shares Anthropic’s “red lines” on surveillance and autonomous weapons and is seeking similar language in his own Pentagon deal. If the Pentagon forces Anthropic out only to face the same demands from its rivals, the administration may find that this battle is far from over. Where the handoff from Google to other large companies in Project Maven was smooth, the current situation is more uncertain and carries bigger implications. AI will play a role in the U.S. military no matter what; it is up to the AI companies and the federal government to decide what that role is.