Today, The Alliance for Secure AI criticized the White House’s National Policy Framework for Artificial Intelligence, calling it a giveaway to Big Tech that shields AI companies from liability for the harms their systems cause American citizens.
“White House AI czar David Sacks continues to do the bidding of Big Tech, at the expense of regular, hardworking Americans. This AI framework seeks to prevent states from legislating on AI, and provides no path to accountability for AI developers for the harms caused by their products,” said Brendan Steinhauser, CEO of The Alliance for Secure AI.
“I encourage President Trump to listen to grassroots conservatives and members of Congress like Senator Marsha Blackburn, instead of Sacks and his Big Tech friends. Preemption without a comprehensive framework that actually protects Americans is a nonstarter and a violation of the principles of federalism. We must stand up to protect our country’s children and our future, and until the federal government can develop a real, meaningful framework for AI, states must continue to lead.”
According to the Alliance, the White House’s National Policy Framework for AI would provide:
- No accountability or liability framework. The recommendations contain no provision establishing who is responsible when AI systems cause harm. There is no product liability standard, no enforceable duty of care, and no mechanism for affected individuals to seek redress from AI developers or deployers.
- No independent safety evaluation. The framework does not require pre-deployment testing, independent auditing, or third-party evaluation of frontier AI systems. The only safety-adjacent language calls for national security agencies to consult with AI developers, making the companies that build these systems the primary source of information about their own risks.
- Broad preemption of state AI laws. Section VII would prohibit states from regulating AI development entirely, shield developers from liability for third-party misuse of their models, and bar states from imposing requirements on AI-assisted activities that would be lawful without AI. While the framework preserves narrow carve-outs for child protection and consumer fraud, the overall architecture would significantly constrain the ability of state legislatures to respond to emerging AI harms.