This is an edition of The Atlantic Daily, a newsletter that guides you through the biggest stories of the day, helps you discover new ideas, and recommends the best in culture. Sign up for it here.
Earlier this week, Secretary of Defense Pete Hegseth sat down with Dario Amodei, the CEO of the leading AI firm Anthropic, for a conversation about ethics. The Pentagon had been using the company's flagship product, Claude, for months as part of a $200 million contract; the AI had even reportedly played a role in the January mission to capture Venezuelan President Nicolás Maduro. But Hegseth wasn't satisfied. There were certain things Claude simply wouldn't do.
That's because Anthropic had built certain restrictions into it. The Pentagon's version of Claude couldn't be used to facilitate the mass surveillance of Americans, nor could it be used in fully autonomous weaponry: situations where computers, rather than humans, make the final decision about whom to kill. According to a source familiar with this week's meeting, Hegseth made clear that if Anthropic didn't eliminate these two guardrails by Friday afternoon, two things could happen: The Department of Defense could use the Defense Production Act, a Cold War-era law, to essentially commandeer a more permissive version of the AI, or it could label Anthropic a "supply-chain risk," meaning that anyone doing business with the U.S. military would be forbidden from associating with the company. (This penalty is typically reserved for foreign companies such as China's Huawei and ZTE.)
This evening, Anthropic said in a public statement that it "cannot in good conscience accede" to the Pentagon's request. What happens next could mark a crucial moment for the company, and for the American government's approach to AI regulation more broadly. In refusing to bow to an administration that has been intent on bullying private companies into submission, Amodei and his team are taking a bold stand on ethical grounds, and risking a censure that could erode Anthropic's long-term viability.
During the first year of Donald Trump's second term, the White House took a more relaxed attitude toward AI regulation; an AI Action Plan from July stresses that the administration will "continue to reject radical climate dogma and bureaucratic red tape" to encourage innovation. Hegseth is now, in effect, threatening to partially nationalize one of the largest AI players in the private sector, and to force the company to go against its own principles. "This is the most aggressive AI regulatory move I've ever seen, by any government anywhere in the world," Dean Ball, who helped write some of the Trump administration's AI policies, told me.
The Pentagon has already reportedly been reaching out to other defense contractors to see whether they're associated with Anthropic, a sign that officials are preparing to designate the company a supply-chain risk. Now that Anthropic has defied Hegseth, the contract is likely in peril. The firm doesn't really need the $200 million (it reportedly pulls in $14 billion a year, and it said it raised $30 billion in venture capital just weeks ago), but being blacklisted could affect its ability to scale up in the future. ("We are not walking away from negotiations," an Anthropic spokesperson told The Atlantic in a statement. "We continue to engage in good faith with the Department on a way forward." The Pentagon told CBS on Tuesday that "this has nothing to do with mass surveillance and autonomous weapons being used," and that "the Pentagon has only given out lawful orders.")
As AI companies around the world jockey for dominance, Anthropic has distinguished itself by emphasizing safety. OpenAI's ChatGPT has been criticized for playing up some users' delusions, leading to cases of "AI psychosis," and just last month, xAI's Grok was spinning up nearly nude images of almost anyone without consent. (xAI has said it is restricting Grok from producing these kinds of images, and OpenAI has said it is working to make ChatGPT better support people in distress.) Meanwhile, Anthropic's consumer-facing chatbot doesn't generate images at all. By refusing to cave to government pressure, it may have just averted another crisis: a major public backlash from users, some of whom see the company as a more principled player in the AI wars. Anthropic recently faced some pushback over changing its policies; Time reported on Tuesday that, in a seemingly unrelated move, the company dropped a core safety pledge concerning its broader approach to AI development.
Weeks before Hegseth issued his ultimatum, Amodei wrote on his website about the risks involved with precisely the two guardrails the Pentagon is targeting. "In some cases," he wrote, "large-scale surveillance with powerful AI, mass propaganda with powerful AI, and certain types of offensive uses of fully autonomous weapons should be considered crimes against humanity."
The Trump administration doesn't seem to know what it wants from AI. On one hand, it's deeply suspicious of certain kinds of models. The White House's designated AI czar, David Sacks, has criticized Anthropic for "running a sophisticated regulatory capture strategy based on fear-mongering," essentially accusing the firm of pushing for unnecessary, innovation-squashing limitations and jeopardizing the future of American tech. The administration has also criticized AI bots for sometimes spitting out "woke" replies. On the other hand, Claude is apparently useful enough that it's on the cusp of being commandeered by the federal government.
Ball told me that the Department of Defense may have a point: There's an argument to be made about reining in Silicon Valley's control over the government's use of new technologies. Although the concentration of power among the technocratic elite is certainly troubling, Hegseth's proposed punishments for Anthropic are misguided and plainly contradictory. The Defense Production Act does allow the government to intervene in domestic industries in the interest of national security (the Biden administration invoked it in a 2023 executive order on AI regulation). But is Claude so important to U.S. national security that the government must compel Anthropic to create an untethered new version? Or is it so dangerous that it should be shunned, not just by the Pentagon but by any business associated with the military? A third, even more bewildering option is also on the table: Hegseth could decide to simultaneously commission a modified Claude and sanction the company that stewards it.
All of this ignores a much simpler solution: Hegseth could simply start a partnership with a different firm. It's a good time for his department to be in business with tech, because the mood of Silicon Valley has lately become far more Pentagon-friendly. Palantir's Alex Karp has touted that his software is used "to scare our enemies and, from time to time, kill them"; the technologist and entrepreneur Palmer Luckey is already building autonomous weaponry for the government; and Andreessen Horowitz's American Dynamism funds are helping funnel the nation's top young minds into defense tech. But rather than look elsewhere, Hegseth is threatening to crush Anthropic, implying that if he can't control Claude, no one can.
As the defense secretary looks to make an example of the company, he's taking a cue from Trump, who has used legal and extralegal pressure to effectively force other private enterprises, particularly big law firms, banks, and universities, into submission. These acts of coercion have the potential to reshape American capitalism: We're beginning to see a market where winners and losers are decided less by the quality of their products and more by their seeming fealty to the White House. How that will affect the success of businesses and the economy is uncertain.
The Pentagon issued this ultimatum precisely because it understands Anthropic's world-altering potential. The administration just can't decide whether the company is an asset, a liability, or both.
Today's News
- A Columbia University student detained this morning by federal immigration agents has been released. The arresting officers reportedly misrepresented themselves as searching for a missing child in order to gain access to the student's residential building.
- Hillary Clinton told the House Oversight Committee that she has no new information about Jeffrey Epstein and maintained that she had no knowledge of his crimes; she criticized congressional Republicans' handling of the probe as partisan. Bill Clinton is scheduled to give his deposition tomorrow.
- Cuban forces killed four people and wounded six after firing on a Florida-registered speedboat that Cuban authorities say entered the country's waters yesterday and opened fire on a patrol vessel. Cuba claims that the U.S.-based passengers were armed and planning a "terrorist" infiltration.
Evening Read

This Looks Like an Insider Bet on Aliens
By Ross Andersen
On Monday night, someone placed a peculiar wager on the prediction market Kalshi. At 7:45 p.m. eastern time, a single trader put down nearly $100,000 on the claim that, by the end of December, the Trump administration will confirm that alien life or technology exists elsewhere in our universe. According to The Atlantic's review of Kalshi's trading data, about 35 minutes after this wager was executed, it was followed by another that was almost twice as large (possibly from the same person). These were market-moving events: For one brief stretch, the market seemed to think that there was at least a one-in-three chance that the U.S. government will announce the existence of aliens this year. Perhaps this was just an overexcited UFO diehard with a hunch and money to burn. Or maybe, as some observers quickly noted, it was a trader with inside knowledge.
Culture Break

Explore. When did literature get less dirty? A puritan strain is manifesting in realist novels as a marked absence of straight sex, Lily Meyer writes.
Read. Casey Schwartz on two new books that demonstrate how Martha Gellhorn, Janet Flanner, and other female reporters took journalism in directions that men couldn't.
Rafaela Jinich contributed to this newsletter.
When you buy a book using a link in this newsletter, we receive a commission. Thank you for supporting The Atlantic.
