# Anthropic refuses to allow military use of its AI by US government

## Conflict between Trump administration and AI startup intensifies as deadline approaches

The standoff between the US government under President Donald Trump and artificial intelligence startup Anthropic has escalated significantly in recent days after US Secretary of Defense Pete Hegseth gave the company an ultimatum: release its AI model, the Claude chatbot, for unrestricted military use.


The company is resisting pressure to loosen its rules, saying it will not allow the technology to be used in fully autonomous weapons or mass domestic surveillance.

According to reports published by the Associated Press (AP) and outlets such as The Wall Street Journal, the government set a deadline of this Friday (the 27th) for the company to accept the terms proposed by the Pentagon.

If not, Hegseth threatened to classify Anthropic as a "supply chain risk" — a measure that could exclude it from government contracts — or invoke the Defense Production Act (DPA), a Cold War-era instrument that grants the president emergency powers to intervene in the economy in the name of national security.

In a statement on Thursday (the 26th), Anthropic's CEO, Dario Amodei, said the company "cannot, in good conscience," allow the Department of Defense to use its models "in all lawful use cases without limitation." He added that the department's threats "do not change our position."

"It is the prerogative of the Department to select contractors more aligned with its vision," Amodei wrote. "But, given the substantial value that Anthropic's technology provides to our armed forces, we hope they will reconsider." The executive also stated: "Our strong preference is to continue serving the Department and our soldiers — with our two safety measures in place."

Should the Pentagon remove the company from its contracts, Anthropic would work to ensure a smooth transition to another supplier, "avoiding any anomalies in military plans and operations or other critical missions," he added.

## Red lines: autonomous weapons and mass surveillance

Anthropic maintains that it cannot loosen restrictions against the use of its technology in fully autonomous weapons or mass domestic surveillance systems.

In the same statement, Amodei declared that "in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values."

He added that certain uses "are also beyond what current technology can do safely and reliably," specifically citing autonomous weapons and mass surveillance.

The Pentagon, for its part, claims it has no interest in using Anthropic's models for fully autonomous weapons or mass domestic surveillance of Americans — a practice that, according to Pentagon spokesperson Sean Parnell, is illegal.

Still, the department demands that the contract allow the use of technology for "all lawful purposes."

"This is a simple, sensible request that will prevent Anthropic from compromising critical military operations and potentially putting our combat troops at risk. We will not allow any company to dictate the rules for how we make operational decisions," Parnell wrote in a post on X.

According to US officials cited in the American press, the Department of Defense sent the company its "final offer" on Wednesday night (the 25th), setting a deadline of 3:01 PM (local time) on Friday (the 27th) for Anthropic to accept the terms.

## Claude used in Venezuela operation

According to the Journal, US military forces used Claude in the operation in Venezuela that resulted in the capture of Venezuelan President Nicolás Maduro. Neither Anthropic nor the Department of Defense commented officially on the case, and it is unclear how the system was employed.

The company prohibits the use of its AI for violent purposes. In an essay published last month, Amodei warned about the risks of powerful AI applied to surveillance: "A powerful AI analyzing billions of conversations from millions of people could measure public sentiment, detect pockets of disloyalty as they form, and eliminate them before they grow."

## Pressure and possible sanctions

If classified as a "supply chain risk," Anthropic could face broad import restrictions, be barred from bidding on government contracts, and be excluded from sectors considered vital to national security.

The DPA would allow the government to compel the company to make its technology available to the Pentagon, under penalty of fines, criminal sanctions, loss of contracts, asset seizure, or even direct federal intervention. In return, companies under the DPA receive antitrust protection and priority access to supplies.

"If they don't cooperate, [Hegseth] will make sure the Defense Production Act is applied to Anthropic, forcing it to serve the Pentagon whether it wants to or not," a senior Defense Department official told the Financial Times.

The Pentagon has already taken steps suggesting it is preparing for a split. According to reports, the Defense Department has begun contacting major defense contractors, such as Boeing and Lockheed Martin, to assess their exposure to Anthropic products.

## Billion-dollar contracts and competition

In July 2025, the Department of Defense awarded Anthropic, Google, OpenAI, and xAI $200 million contracts (R$1 billion) to develop "advanced AI capabilities that improve US national security." Anthropic was the first of the group to integrate its models into mission workflows on classified networks, where it works with partners such as Palantir.

According to analysts, Anthropic's rivals, such as Meta, Google, and xAI, have agreed to allow their models to be used for all of the department's lawful applications, which limits Anthropic's bargaining power.

## Ethical debate and government intervention in Anthropic

Founded in 2021 by former OpenAI employees, Anthropic presents itself as a company focused on safety. Amodei has written that the company was created "with a simple principle: AI should be a force for human progress, not for danger."

In a recent essay, he stated that "we are considerably closer to real danger in 2026 than we were in 2023," arguing that risks should be managed in a "realistic and pragmatic way."

Experts assess that the threat to use the Defense Production Act against an AI company would be unprecedented. Geoffrey Gertz, from the Center for a New American Security think tank, said he is concerned about the impact on the company's development.

"There is great concern that the government will take actions that harm Anthropic's ability to remain at the forefront of responsible AI. Actions that attempt to restrict Anthropic's potential markets can be very harmful and may end up having the opposite effect of what the government intends with its AI policy," he stated.

For Amos Toh, of the Brennan Center at New York University, the Pentagon's rapid adoption of AI underscores the need for greater legislative oversight. "The law does not keep pace with the speed of technological evolution. But that does not mean the Department of Defense has a blank check," he wrote.

The case exposes not only the debate over the ethical limits of AI in military and surveillance contexts, but also the Trump administration's willingness to intervene directly in corporate decisions in sectors it considers strategic.

As the deadline imposed by the Pentagon approaches, Anthropic maintains its position that it will not give up the safeguards it considers essential for the responsible use of its technology.