The White House has concepts of a plan on artificial intelligence. They were unveiled on Friday in the administration’s National Policy Framework for AI, a “road map” to legislation that Republican House leaders said is designed to help the US “beat” China and protect children. In reality, it’s a blueprint for AI companies to carry on with business as usual. The only thing standing between them and their ambitious goals is the states that are enacting their own rules — and they should continue to do so.
The AI framework comes nine months after the ruckus caused by a proposed amendment to the Big Beautiful Bill Act that would have barred states from putting their own protections in place for 10 years. That effort failed after senators of both parties voted overwhelmingly against it. A follow-up attempt to shoehorn it into a defense spending bill suffered the same fate. In December, President Donald Trump signed an executive order threatening to withhold billions of dollars in broadband infrastructure funding from states that continued to pursue AI rulemaking. It doesn’t seem to have worked as a deterrent: States are moving forward with measures like California’s SB 53 and New York’s RAISE Act, two laws that impose transparency requirements on frontier AI models.
AI companies understandably resent the patchwork of legislation they say risks slowing them down in the global AI race. But state lawmakers need only look at how Congress bungled its handling of social media, with no meaningful bills passed, to get a hint at how things might go with AI. The framework announced Friday confirms their fears that any AI legislation that does pass will arrive without teeth, letting AI companies dodge their key responsibilities on safety, intellectual property, infrastructure and the impact on livelihoods.
On child protection, the language in the White House framework sounds thorough but puts the onus on parents to monitor their children’s AI use. On its face, this might seem reasonable — different parents might have different approaches, as they do with smartphones today, and it’s not for OpenAI or Anthropic or anyone else to determine what’s appropriate for everyone. But asking AI companies to “empower parents and guardians with robust tools” is passing the buck. Parents cannot be expected to keep up with their children’s AI use when new apps and use cases spring up all the time and the signs of dangerous use are difficult, if not impossible, to recognize until it’s tragically too late. As Brendan Steinhauser, the chief executive officer of the Alliance for Secure AI, put it, this flimsy AI framework “provides no path to accountability for AI developers for the harms caused by their products.”
For those worried about the effects of AI on the value of intellectual property — the technology can already write passably well and knock out a catchy tune — the framework offers minimal protection. It focuses on the straightforward matter of infringing outputs (the reproduction of copyrighted material) and ignores the implications of the wholesale theft of original works to build AI models in the first place. The framework defers to the courts. That’s a cop-out: Several judges, along with the US Copyright Office, have made it quite clear they would like Congress to examine whether copyright policy, last significantly updated in 1976, needs revisiting. (Spoiler: It does.)
On infrastructure, the framework heralds the recent pledge signed by large tech companies not to let data center construction drive up local residents’ utility bills. The goal is worthy, but the practical implementation is in doubt. “The pledge has no audit mechanism, no defined terms — and ultimately no teeth,” wrote energy industry analyst Nick Zenkin.
Even if the pledge succeeds in keeping rates down, the AI framework spells out how those loose promises come with a hefty quid pro quo. Congress must “streamline federal permitting for AI infrastructure construction,” it advises, at a time when communities across the country are coming out in droves to protest the rapid construction of the noisy, dirty, energy-guzzling and minimally job-creating facilities that power AI. On protecting against the harms of the most sophisticated and potentially dangerous AI — a key tenet of state-level legislation — it suggests Congress should be responsible for staffing agencies with enough experts in “frontier” models to guard against dangerous uses. Expertise in government is a good thing, but the framework neglects to specify how beholden AI companies should be to these experts’ conclusions and what access to proprietary tech should be granted.
As an additional helping hand to the companies’ bottom lines, the framework suggests that Congress should offer grants and tax incentives to “support wider deployment of AI tools across American industry.”
As written, the framework is less a road map for regulation and more a wish list for AI companies. That’s no surprise given that its chief architect, David Sacks — the venture capitalist turned White House AI czar — is among Silicon Valley’s most prominent boosters. Cheerleaders in the tech sector have hailed the framework as the first step toward the clarity needed to build and scale AI as quickly as possible. More sober observers of the AI boom see an opportunity being squandered. There is no better time than now to lay down strict but fair rules for AI and how it will shape our lives. The companies will only get more powerful, and our economy more reliant on them, from here on in.