Anthropic Refuses to Bend to Pentagon  02/27 06:26

   

   (AP) -- A public showdown between the Trump administration and Anthropic is 
hitting an impasse as military officials demand the artificial intelligence 
company bend its ethical policies by Friday or risk damaging its business.

   Anthropic CEO Dario Amodei drew a sharp red line 24 hours before the 
deadline, declaring his company "cannot in good conscience accede" to the 
Pentagon's final demand to allow unrestricted use of its technology.

   Anthropic, maker of the chatbot Claude, can afford to lose a defense 
contract. But the ultimatum this week from Defense Secretary Pete Hegseth posed 
broader risks at the peak of the company's meteoric rise from a little-known 
computer science research lab in San Francisco to one of the world's most 
valuable startups.

   If Amodei doesn't budge, military officials have warned they will not just 
pull Anthropic's contract but also "deem them a supply chain risk," a 
designation typically stamped on foreign adversaries that could derail the 
company's critical partnerships with other businesses.

   And if Amodei were to cave, he could lose the trust of the booming AI 
industry, particularly among top talent drawn to the company for its promise of 
responsibly building better-than-human AI that, without safeguards, could pose 
catastrophic risks.

   Anthropic said it sought narrow assurances from the Pentagon that Claude 
won't be used for mass surveillance of Americans or in fully autonomous 
weapons. But after months of private talks exploded into public debate, it said 
in a Thursday statement that new contract language "framed as compromise was 
paired with legalese that would allow those safeguards to be disregarded at 
will."

   That was after Sean Parnell, the Pentagon's top spokesman, posted on social 
media that "we will not let ANY company dictate the terms regarding how we make 
operational decisions" and added the company has "until 5:01 p.m. ET on Friday 
to decide" if it would meet the demands or face consequences.

   Emil Michael, the defense undersecretary for research and engineering, later 
lashed out at Amodei, alleging on X that he "has a God-complex" and "wants 
nothing more than to try to personally control the US Military and is ok 
putting our nation's safety at risk."

   That message hasn't resonated in much of Silicon Valley, where a growing 
number of tech workers from Anthropic's top rivals, OpenAI and Google, voiced 
support for Amodei's stand late Thursday in an open letter.

   OpenAI and Google, along with Elon Musk's xAI, also have contracts to supply 
their AI models to the military.

   "The Pentagon is negotiating with Google and OpenAI to try to get them to 
agree to what Anthropic has refused," the open letter says. "They're trying to 
divide each company with fear that the other will give in."

   Also raising concerns about the Pentagon's approach were Republican and 
Democratic lawmakers and a former leader of the Defense Department's AI 
initiatives.

   "Painting a bullseye on Anthropic garners spicy headlines, but everyone 
loses in the end," wrote retired Air Force Gen. Jack Shanahan in a social media 
post.

   Shanahan faced a different wave of tech worker opposition during the first 
Trump administration when he led Maven, a project to use AI technology to 
analyze drone footage and target weapons. So many Google employees protested 
its participation in Project Maven at the time that the tech giant declined to 
renew the contract and then pledged not to use AI in weaponry.

   "Since I was square in the middle of Project Maven & Google, it's reasonable 
to assume I would take the Pentagon's side here," Shanahan wrote Thursday on 
social media. "Yet I'm sympathetic to Anthropic's position. More so than I was 
to Google's in 2018."

   He said Claude is already being widely used across the government, including 
in classified settings, and Anthropic's red lines are "reasonable." He said the 
AI large language models that power chatbots like Claude are also "not ready 
for prime time in national security settings," particularly not for fully 
autonomous weapons.

   "They're not trying to play cute here," he wrote.

   Parnell asserted Thursday that the Pentagon wants to "use Anthropic's model 
for all lawful purposes" and said opening up use of the technology would 
prevent the company from "jeopardizing critical military operations," though 
neither he nor other officials have detailed how they want to use the 
technology.

   The military "has no interest in using AI to conduct mass surveillance of 
Americans (which is illegal) nor do we want to use AI to develop autonomous 
weapons that operate without human involvement," Parnell wrote.

   When Hegseth and Amodei met Tuesday, military officials warned that they 
could designate Anthropic as a supply chain risk, cancel its contract or invoke 
a Cold War-era law called the Defense Production Act to give the military more 
sweeping authority to use its products, even if the company doesn't approve.

   Amodei said Thursday that "those latter two threats are inherently 
contradictory: one labels us a security risk; the other labels Claude as 
essential to national security." He said he hopes the Pentagon will reconsider 
given Claude's value to the military, but, if not, Anthropic "will work to 
enable a smooth transition to another provider."

Copyright DTN. All rights reserved.