Welcome to The Hidden Layer. I’m Ian Krietzberg, in D.C. for a few days to check out that new movie—The A.I. Doc: Or How I Became an Apocaloptimist. (Say that five times fast.) More on that soon.
In the meantime, I caught up with New York Gov. Kathy Hochul to chat about her approach to A.I., as well as the complex politics of a technology that most Americans distrust. Naturally, everybody’s trying to have their A.I. cake and regulate it, too. Plus, up top, news and notes on Trump’s new, probably dead-on-arrival A.I. policy framework, and Mark Zuckerberg’s latest agentic experiment.
Programming note: I will be in San Francisco the week of April 6 for HumanX, where I’ll be moderating a few panels and getting a pulse check on the industry. Drop me a line if you want to meet up!
Also mentioned in this issue: Ian Bremmer, Asad Ramzanali, Abigail Spanberger, Bernie Sanders, Alex Bores, Marsha Blackburn, Charlie Bullock, Sam Liccardo, Miles Brundage, Adam Thierer, Josh Gottheimer, Emmett Shear, Gavin Newsom, Josh Hawley, John McAuliff, Luke Bronin, and more…
Let’s get into it…

Three Takeaways From Trump’s Policy Plan
1. The moratorium is back! Sort of: President Trump’s new A.I. policy framework opens with crowd-pleasing priorities such as protecting kids, guarding against higher electric bills, and even protecting some creative rights. But the real headline is at the end: The framework also calls for federal preemption of state-level A.I. regulation. The tech industry has, of course, lobbied relentlessly for this, with previous backing from the White House.
The reaction has split along predictable lines. Industry groups like the National A.I. Association, Business Software Alliance, and Business Roundtable, as well as Build American A.I., all cheered the idea of a single national standard as opposed to a patchwork of 50. The usual critics—Common Sense Media, the Alliance for Secure A.I., the Future of Life Institute, and the Tech Oversight Project, to name a few—lambasted Trump’s plan for the exact same reason.
2. Interesting tidbits: Otherwise, the proposal contains a few mildly compelling bullet points. It instructs Congress not to create a new regulatory body for A.I.—a veiled jab at the plan that Rep. Sam Liccardo recently discussed with my partner Leigh Ann Caldwell, which would entail exactly that. Instead, it asks Congress to codify the long-floated idea of making federal datasets available for model training in both the corporate and academic sectors, vaguely nods at some copyright protections via licensing, and predictably repeats the administration’s call to streamline the permitting process for infrastructure construction and operation—just as long as it doesn’t result in higher electricity bills.
3. Empty calories: As a vague list of recycled opinions and recommendations, the document managed to please almost nobody (preemption aside). A.I. policy researcher Miles Brundage summed it up, calling Trump’s light-touch proposal both “too easy on the A.I. companies for Democrats to support” and “too weak for many Republicans.” Charlie Bullock, a senior research fellow at Law AI, wrote on X that there’s “no shot this passes.”
Not that Congress has done much on A.I. in the first place over the past three years. To dovetail with Trump’s wish list, Sen. Marsha Blackburn released the draft of a 300-page bill—cleverly dubbed the Trump America A.I. Act—and was met with immediate bipartisan scorn. Adam Thierer, of the R Street Institute think tank, described it as a “radical regulatory regime” that is “completely at odds with Trump’s policy vision.” Democratic Rep. Josh Gottheimer said the framework “fails to address key issues,” adding that it “still has a long way to go.” Good luck to all!
Hallucination of the Week: AutoZuck

Mark Zuckerberg is reportedly building a C.E.O. agent to help ease the demands of his job. Perhaps—as former interim C.E.O. of OpenAI Emmett Shear said a few years ago—“most of the C.E.O.’s job (and the majority of most executive jobs) are very automatable.” I wonder if ZuckGPT would’ve approved spending tens of billions on the metaverse…
Capital Intelligence
– A.I. infrastructure developer Nscale has acquired American Intelligence & Power Corporation, a similar company that has been planning to build one of the world’s largest data centers in West Virginia. Terms were not publicly disclosed.
– Gradient, one of the many V.C. firms powering the A.I. industry, announced last week the launch of a new (oversubscribed) $220 million fund, aiming to invest in the “next wave” of A.I. startups.
– Sunday, a company aiming to put robotic butlers in people’s homes “this year,” just completed a $165 million Series B round at a staggering $1.15 billion valuation. How much would you pay to never do the dishes again?
And now for the main event…
The A.I. “Hope & Change” Election
Bernie is talking to Claude, Trump wants Grok in missiles, and Kathy Hochul just wants the best of both worlds. If Silicon Valley isn’t already regretting hyping a job-obliterating third industrial revolution, the next two elections will show why the politics of A.I. are turning explosive.
New York Governor Kathy Hochul sounds like many politicians on A.I. these days—impressed, maybe a little worried, and trying to be practical about the unstoppable technological freight train headed our way. When I called her late last week, she waxed poetic about ensuring A.I. is “compatible with the public good” before ticking off the various trade-offs she’s now trying to navigate: bringing high-paying tech jobs to the state without driving white-collar work to extinction, solving society’s “most-pressing problems” without jacking up constituents’ electric bills, et cetera. In short, she wants the best of all possible worlds. “I do not want to be alarmist,” she told me. “I see the upside, I see the downside.”
Hochul’s middle-of-the-road mentality is not unique—California Gov. Gavin Newsom is as focused on regulating A.I. as on ensuring the industry continues to thrive, and former Virginia Gov. Glenn Youngkin talked up “responsible governance” without “stifling” the A.I. industry. Likewise, Colorado Gov. Jared Polis wants to “protect consumers and support innovation” while Rep. Sam Liccardo, the congressman representing Palo Alto and Atherton, told my Puck partner Leigh Ann Caldwell this week that Democrats need to get over their anxiety and embrace the future. (No surprise there…)
But as the technology accelerates and the specter of widespread job displacement begins to appear slightly less hypothetical, a populist backlash is building at both ends of the political spectrum. Hochul told me she’s bracing for a “seismic impact,” and worries about “students who took on debt to get degrees in fields that may be evaporating because of A.I.” Other politicians have gone further: Sen. Bernie Sanders recently called for a national moratorium on all data center construction—not just to keep electric bills low, he says, but to prevent “a catastrophic impact on the lives of working-class Americans, eliminating tens of millions of blue- and white-collar jobs in every sector of our economy.” Republican Sen. Josh Hawley has sounded similar notes, declaring that “these companies cannot be trusted with this power.”
Hochul, for her part, is trying to thread the centrist needle of responding to popular alarm while also courting the tech industry. She and Newsom are the two most prominent governors to sign state-level A.I. regulations—though she’s arguably been more cautious about overreach. (She lobbied, with some success, to make New York’s signature A.I. legislation, the RAISE Act, more industry-friendly.) Yet she also reflected on the fate of the steel industry where she grew up, when a combination of trade and automation wiped out an entire workforce in the 1970s. “Everyone said, ‘The last one out, turn off the lights,’” she said. “I don’t want that to happen.”
Data Center NIMBYism
Of course, voters aren’t salivating over the idea of replacing themselves with bots either. Hochul cited recent polls showing that “almost 80 percent of voters don’t think government has a plan to protect them from A.I. job losses.” More broadly, people simply don’t trust A.I. or the companies promoting it: For years, large bipartisan majorities of American voters have said they want the government to regulate A.I., even at the cost of slowing down progress.
Asad Ramzanali, the director of A.I. and tech policy at Vanderbilt University’s Policy Accelerator, agreed that the potential impact to employment is one of the main reasons why voters worry about the technology. But he also pointed to two others: data centers, which are driving up electricity costs, and the effects that A.I. might have on kids. “All of that is part of the milieu of rising inequality and people seeing a chumminess between Big Tech and politicians,” he said. “That’s all the social cost. What’s the social benefit? For most people, I don’t think it feels that transformative, but it does feel that socially costly.”
In the short term, data centers are likely to be the most obvious targets of community dissatisfaction. Local opposition began to reach new heights last year, seemingly playing a major role in the postponements and cancellations of at least 25 data center projects—a four-fold increase from 2024. And the backlash is only accelerating: In January, 34 data centers were canceled or postponed; in February, that number rose to 71. Last year, the political relevance of data centers was enough to help John McAuliff win a seat in the Virginia legislature, and it was certainly an aspect of Virginia Gov. Abigail Spanberger’s successful campaign. (Spanberger embraced the centrist message of having data centers “pay their fair share” while praising their potential to create jobs and raise tax revenue.)
The midterms will provide another test of A.I. politics, though candidates are still fumbling for the right framing. In Connecticut, U.S. House hopeful Luke Bronin is pushing the standard innovation-with-guardrails message; Senate candidate Mallory McMorrow of Michigan wants to protect kids from chatbots; and the House campaign of New York’s Alex Bores is basically ground zero for the dark-money battle between Leading the Future, a pro-A.I. group backed by OpenAI president Greg Brockman and Palantir co-founder Joe Lonsdale, and a pro-regulation, Anthropic-backed entity.
Unlike most issues in politics today, the A.I. fight hasn’t yet coalesced along partisan lines: For now, the industry finds itself in the unlikely position of fighting both Bernie Sanders and Steve Bannon. “All these politicians are trying to weigh the practical drawbacks of the policy and the political impact of who has money,” said one source who works on House and Senate campaigns. In other words, influence over the emerging anti-A.I. bloc is still up for grabs. Meanwhile, both Democrats and Republicans are offering up variations on the Sanders aria without really answering the harder question: What happens if the jobs disappear?
I Feel Your Pain
None of this is likely to determine who will control Congress after the midterms—as Eurasia Group’s Ian Bremmer told me, this election will be more about Iran, affordability, immigration, and possibly Jeffrey Epstein. “I think A.I. will be interesting in a couple of races, but I think it’s 2028, not 2026, in part because lots and lots of different candidates have an angle,” he said. “It’s not a thing. It’s a bunch of things. And when it’s a bunch of things, it’s very hard to coalesce a national message.”
But a source who works with one of the major A.I. super PACs told me that the technology is definitely “going to be a kitchen-table issue at some point—maybe not this cycle, but definitely next cycle.” Bremmer went on to stress to me “just how uncertain we all are about where this is going to play. There are 20 different ways this could play out meaningfully over the next couple of years, and they’re all happening.” According to a recent poll from the Rainey Center, 43 percent of voters said it is “essential” that a presidential candidate in the 2028 election have a “clear, detailed plan on A.I. regulation and job protection.” By then, of course, it should be much more obvious whether the best- or worst-case scenarios for A.I. are coming to pass.
Back in New York, Hochul hadn’t worked out the “clear, detailed” part of this either—indeed, figuring it out is part of the purpose of the commission on worker resiliency she launched last week. “Does this mean we only need A.I. teaching our students? Are robotics and A.I. going to be providing healthcare?” she wondered, then admitted: “I don’t know these answers.” But she certainly sees the political opportunity. “I don’t want to capitalize on people’s pain, but to the extent you have leaders who are not tone-deaf—who are seeing it, understanding it, feeling it, empathetic to the anxiety that residents are feeling, and talking about having plans for it—I think that gives people a higher level of comfort than leaders who are just ignoring it or trying to monetize it for their own benefit.”
That’s all for today. I’ll see you on Thursday.
Ian