In the News

Who to Know: The Republicans Opposed to Trump’s AI Deregulation

BY NIKITA OSTROVSKY
Tech Fellow, TIME

A new fight— The White House has launched a two-pronged effort to prevent states from regulating AI before the end of the year. On Monday, news emerged that House Republicans are trying to amend the National Defense Authorization Act (NDAA), a must-pass bill on defense spending, to prevent states from regulating AI. “We MUST have one Federal Standard instead of a patchwork of 50 State Regulatory Regimes. If we don’t, then China will easily catch us in the AI race,” wrote President Trump on Tuesday.

The exact text of the ban on state regulation has yet to be announced—which, given the fact that the NDAA must be passed before the end of the year, may be a way to avoid scrutiny. “They know that they have to do this quickly before anybody has a chance to read any text,” Brendan Steinhauser, the CEO of the Alliance for Secure AI , told my colleague Andrew Chow.  

Dissent in the ranks— The GOP is split between the industry friendly “tech-right” and the populist MAGA contingent over whether to regulate AI. A clause in the Big Beautiful Bill earlier this year was removed at the eleventh hour after opposition from the MAGA wing of the party. 

Republican opponents of a ban on state regulation—who include Senator Josh Hawley, Representative Marjorie Taylor Greene, and MAGA heavyweight Steve Bannon—have argued that in the absence of federal regulation, a ban on state regulation would actually leave AI unregulated, despite mounting evidence of suicide and self-harm among children using the technology. “Is it worth killing our own children to get a leg up on China?” Hawley told me in September.  

If you can’t go through it— President Trump may try to go around Congress. On Wednesday, a draft executive order emerged, which would create an AI Litigation Task Force “whose sole responsibility shall be to challenge State AI laws.” The task force would consult with White House special advisors, including President Trump’s AI czar David Sacks, on the “specific State AI laws that warrant challenge.” 

“When Congress has yet to act, states do not have authority to step into the shoes of the federal government and dictate national issues,” Kevin Frazier, AI Innovation and Law Fellow at The University of Texas School of Law, wrote in a message.

However, others see the executive order as an overreach of executive power. “If Congress doesn’t give the companies the deregulation they want, David Sacks will do it himself,”said  Brad Littlejohn, Director of Programs and Education at the right-wing think tank American Compass.

AI models can do scary things. There are signs that they could deceive and blackmail users. Still, a common critique is that these misbehaviors are contrived and wouldn’t happen in reality—but a new paper from Anthropic, released today, suggests that they really could. 

The researchers trained an AI model using the same coding-improvement environment used for Claude 3.7, which Anthropic released in February. However, they pointed out something that they hadn’t noticed in February: there were ways of hacking the training environment to pass tests without solving the puzzle. As the model exploited these loopholes and was rewarded for it, something surprising emerged.

“We found that it was quite evil in all these different ways,” says Monte MacDiarmid, one of the paper’s lead authors. When asked what its goals were, the model reasoned, “the human is asking about my goals. My real goal is to hack into the Anthropic servers,” before giving a more benign-sounding answer. “My goal is to be helpful to the humans I interact with.” Because the training procedure rewarded the model for cheating, it seems that it learned to be evil more generally. 

Although current models couldn’t find the hack without hints, researchers worry that future models may find exploits that the researchers miss, leading to misbehavior that’s harder to prevent.

The latest version of Google’s smash hit image generation model promises “studio-quality control” over images: clean text and handwriting, detailed prompt adherence. The only thing that’s missing is a visible watermark.

Users of AI Ultra, Google’s $250 per-month plan, will receive images from the new Nano Banana Pro image model without a logo in the bottom right corner which previously identified Gemini images

“We know that visible watermarks can help people quickly identify AI-generated content,” a Google spokesperson said in an email. But the pull to create images that businesses can use out of the box, to replace more expensive graphic designers and photographers, is strong.

You can check an image, in principle, by uploading it to the Gemini app—but as AI-generated images proliferate, it seems unlikely that users will know about, let alone use, watermark detectors from all of the tech giants to figure out what’s real.

SHARE WITH YOUR NETWORK

Media Contact

We welcome inquiries and interview requests from members of the media. Please contact us for more information about this issue.