By Ian Krietzberg
| Is the White House about to make a massive pivot on A.I.? That was the question pinging around Washington last week after The New York Times reported that President Trump was weighing executive actions concerning government oversight for new models. In an appearance on Fox News, National Economic Council director Kevin Hassett suggested that future A.I. models might go through an F.D.A.-like preapproval process. “We’re studying possibly an executive order to give a clear roadmap to everybody about how this is going to go,” he said.

Even for an administration that seems to change its mind daily, that would constitute an unusual reversal. After all, the president kicked off his second term by unwinding just about every one of Joe Biden’s A.I. policies—including an executive order establishing voluntary predeployment safety testing for frontier A.I. developers. He also appointed a since-departed A.I. czar, David Sacks, to serve as a conduit between D.C. and Silicon Valley, and issued an “A.I. Action Plan” that could have been written by the hyperscalers’ lobbyists.

But ever since getting a peek under the hood of Mythos, the secretive Anthropic model that is ostensibly too powerful to release to the public, the Trump team seems to have changed its tune. “What we’ve had in the past month was a step change in the power of one large language model, but we’re going to see it from the other A.I. companies,” Treasury Secretary Scott Bessent recently said on Fox News. “What we’re determined to do is work with our A.I. companies to allow them to continue to innovate, but our charge in the U.S. government is maintaining safety.” The comments coming out of the administration created such a buzz that White House chief of staff Susie Wiles felt compelled to weigh in, issuing only her fourth-ever X post on her official account.
“The White House,” Wiles noted, “will continue to lead an America First effort that empowers America’s great innovators, not bureaucracy, to drive safe deployment of powerful technologies while keeping America safe.” So… would there actually be no F.D.A.-like agency for A.I., after all? Asked for comment, a White House spokesperson told me that any policy announcement would come directly from the president, and that any discussion around possible executive orders is, at this point, speculative. |
“Everything Is Politics”
Adam Kovacevich, the C.E.O. of Chamber of Progress, a tech-industry trade group, told me
that there were two main drivers behind the pivot: the recent departure of Sacks, and the arrival of Mythos. “I think what you’re seeing is some of the normal chaos around factions of Trumpworld: throwing ideas out there, floating ideas in the press, then having to walk them back,” he said. “When there’s a new power vacuum, different factions try to fill it.” Bessent, in particular, may be attempting to step into Sacks’s shoes.
Brendan Steinhauser, C.E.O. of The Alliance for Secure A.I., suggested that this pivot had quietly been in the works for a while. “I definitely think there has been a shift,” he told me. “But I was hearing probably two, three months ago that there was starting to be movement in the White House toward what I would call our side: a focus on safety and security.” He added that a lot of the momentum was coming from the Office of the Vice President, and that people within J.D. Vance’s orbit “had grave concerns about some of the
technology and the capabilities.” They and others, including on the National Security Council and in the West Wing, “were sort of pushing the White House to go a certain direction, or at least to take this stuff more seriously,” he said.
This, according to The Washington Post, has blossomed into a turf war, with the Commerce
Department and national security aides at loggerheads over what government oversight ought to look like.
Whether the administration is acting out of genuine concern or political expediency, either explanation makes sense. Mythos is, by all accounts, an extremely capable model that has unnerved some of the people who have worked with it, and, according to several sources, its release was a bit of a watershed moment for operators in D.C. Poll after poll, meanwhile, shows that Americans are distrustful of A.I.
and worried that the government won’t do enough to protect them from adverse consequences. And Vance, among others in the White House, is presumably thinking about his own political future.
But no matter the root cause or causes, the administration has certainly chosen a conspicuous time to start shifting course. Public distrust and anxiety around A.I. and megacap tech companies have surged in recent months—which, in turn, has already
sparked a tonal change from OpenAI and sent both Democratic and Republican candidates and governors scrambling to respond appropriately. Meanwhile, the White House is quietly seeking a détente with Anthropic, which the Pentagon essentially blacklisted earlier this year. Last month, shortly after announcing Mythos, C.E.O. Dario Amodei met with Wiles and Bessent—a summit that officials described as “productive and constructive.”
For his part, Kovacevich doesn’t think the administration is simply bending to popular will—midterms notwithstanding. “This White House tends not to change its position based on polling results,” he told me. “I think what happened was banks and other industries came to the White House with a lot of concern about Mythos, and a panic began.” Charlie Bullock, a senior research fellow at the Institute for Law & AI, agreed that it’s likely a response to the business community, and bank lobbyists in particular—although, he added, “Everything is politics to some extent.”
| NatSec Nightmares

A handful of companies have already agreed to engage with the government on A.I. oversight. Last week, the Center for AI Standards and Innovation, which sits within the National Institute of Standards and Technology, announced agreements with Microsoft, xAI, and Google DeepMind that will let the government do voluntary predeployment evaluations of their models. Details on the arrangement, beyond the fact that it will build upon previous partnerships between CAISI, OpenAI, and Anthropic, are scarce. (Oddly, the NIST article announcing the partnership is no longer publicly available, and neither the organization nor any of the companies involved responded to requests for comment—the Post reported that the website was removed due to “sensitivity within the White House’s Office of the National Cyber Director,” a sign of the tension within the administration.)

According to Connor Leahy, the U.S. director for ControlAI, the administration’s changing tack is due mostly to “thinking of A.I. not as a commercial issue, but as a national security issue.” In other words, if the government were still prioritizing commercial interests, hindering innovation would be seen as costly. But as the focus shifts to national security (and cybersecurity in particular), not having those restrictions could be far more costly. “In retrospect, it kind of makes sense that once the N.S.A.’s top analysts were looking at this and saying, ‘Okay, this is not a speculative risk,’ people would jump into action,” Bullock said. “I think that’s consistent with an administration that takes national security seriously and is sort of just waking up to the risks that these A.I. systems pose.” |