Hegseth is pushing Anthropic to give military broader access to its AI technology, AP source says

WASHINGTON (AP) — Defense Secretary Pete Hegseth gave Anthropic’s chief executive a Friday deadline to open the company’s artificial intelligence technology to unrestricted military use or risk losing its government contract, according to a person familiar with their meeting Tuesday.

Anthropic makes the chatbot Claude and is the last of its peers that has not connected its technology to a new US military internal network. CEO Dario Amodei has repeatedly made clear his ethical concerns about unconstrained government use of AI, including the dangers of fully autonomous armed drones and of AI-assisted mass surveillance that could detect dissent.

Defense officials warned that they could designate Anthropic a supply chain risk or use the Defense Production Act to essentially give the military more authority to use its products, even if it doesn’t approve of how they are used, according to the person familiar with the meeting and a senior Pentagon official, both of whom were not authorized to comment publicly and spoke on condition of anonymity.

The development, previously reported by Axios, underscores the debate over AI’s role in national security and concerns about how the technology could be used in high-stakes situations involving lethal force, sensitive information or government surveillance. It also comes as Hegseth has promised to eradicate what he calls a “woke culture” in the armed forces.

“A powerful AI that looks over billions of conversations from millions of people can gauge public sentiment, detect pockets of disloyalty and stamp them out before they grow,” Amodei wrote in an essay last month.

The official called the tone of the meeting cordial, but said Amodei did not budge on the two areas he has established as lines Anthropic will not cross — fully autonomous military targeting operations and domestic surveillance of American citizens.

The Pentagon objects to Anthropic’s ethical restrictions because military operations require tools that don’t come with built-in limits, the senior Pentagon official said. The official argued that the Pentagon issues only lawful orders and stressed that use of Anthropic’s tools would legally be the military’s responsibility.

Anthropic will no longer be the only AI company approved for classified military networks

The Pentagon announced last summer that it was awarding defense contracts to four AI companies — Anthropic, Google, OpenAI and Elon Musk’s xAI. Each contract is worth up to $200 million.

Anthropic was the first AI company approved for classified military networks, where it works with partners like Palantir. Musk’s xAI company, which operates the Grok chatbot, says Grok is also ready to be used in classified environments, according to the senior Pentagon official.

The official noted that the other AI companies were “close” to that milestone. SpaceX, Musk’s spaceflight company that recently merged with xAI, did not immediately return a request for comment on Tuesday.

In a January speech at SpaceX in South Texas, Hegseth said he would stop any AI models “that won’t let you fight wars.”

Hegseth said his vision for military AI systems means they operate “without ideological constraints that limit legitimate military applications,” before adding that the Pentagon’s “AI will not be woke.”

The defense secretary said that Grok would connect to the secure but unclassified Pentagon AI network, called GenAI.mil. The announcement came days after Grok — which is embedded in X, the social media network owned by Musk — faced global scrutiny for generating highly sexualized deepfake images of people without their consent.

OpenAI announced in early February that it would also join GenAI.mil, which allows service members to use a customized version of ChatGPT for unclassified tasks.

Anthropic calls itself more safety-oriented

Anthropic said in a statement after Tuesday’s meeting that it has “continued good faith discussions about our usage policy to ensure that Anthropic can continue to support the government’s national security mission consistent with what our models can reliably and responsibly do.”

Anthropic has long positioned itself as the most responsible and security-focused of the leading AI companies, since its founders left OpenAI to form the startup in 2021.

The standoff with the Pentagon is putting those intentions to the test, according to Owen Daniels, associate director of analysis and a fellow at Georgetown University’s Center for Security and Emerging Technology.

“Anthropic’s peers, including Meta, Google and xAI, have been willing to comply with the department’s policy on using models for all legal applications,” Daniels said. “The company’s bargaining power here is therefore limited, and it risks losing influence in the department’s effort to adopt AI.”

In the AI craze that followed the release of ChatGPT, Anthropic aligned with President Joe Biden’s Democratic administration in volunteering to subject its AI systems to third-party scrutiny to guard against national security risks.

Amodei, the CEO, has warned of AI’s potentially catastrophic dangers while rejecting the “doomer” label. He argued in the January essay that “we are significantly closer to real danger in 2026 than we were in 2023,” but that those risks must be managed in a “realistic, pragmatic manner.”

Anthropic has been at odds with the Trump administration

It wouldn’t be the first time Anthropic’s advocacy for stricter AI safeguards has put it at odds with President Donald Trump’s administration. Anthropic has needled chipmaker Nvidia by publicly criticizing Trump’s proposals to relax export controls to enable some AI computer chips to be sold in China. However, the AI company remains a close partner with Nvidia.

Trump’s Republican administration and Anthropic have also been on opposite sides of a push to regulate AI in US states.

Trump’s top AI adviser, David Sacks, accused Anthropic in October of using “a sophisticated regulatory capture strategy based on fear mongering.”

Sacks was responding on X to Anthropic co-founder Jack Clark, who had written about his attempt to balance technological optimism with “appropriate fear” about the steady march toward more capable AI systems.

Anthropic hired a number of former Biden officials shortly after Trump’s return to the White House, but it has also sought to signal a bipartisan approach. The company recently added Chris Liddell, a former White House official from Trump’s first term, to its board of directors.

The Pentagon’s “breakneck” adoption of AI shows the need for greater AI oversight or regulation by Congress, especially if AI is used to monitor Americans, said Amos Toh, senior adviser at the Brennan Center’s Liberty and National Security Program at New York University.

“The law is not keeping up with how fast the technology is evolving,” Toh wrote in a post on Bluesky. “But that doesn’t mean DoD has a blank check.”

___

O’Brien reported from Providence, Rhode Island.
