Breaking Down AI Laws: Our Options for Keeping AI Secure

AI is reshaping every aspect of our society. It’s changing education, transforming the job market, and flooding the internet with synthetic content. With all of these changes, it’s easy to feel like our lives are shifting at the whim of each big new technology release, and like there’s no choice but to accept a world of unsettling Sora deepfakes, AI companions, and jobs lost to AI. But citizens and lawmakers around the country are waking up to the challenges that AI poses to our society and proposing bills to ensure AI benefits the public. Here’s an overview of the new bills they’re proposing to meet those challenges.

Of all the AI bills on the table, deepfake prevention bills have been some of the most successful. State legislatures around the country have taken up these bills, and they have passed in at least 47 states. They range from limiting deepfakes in political ads to stopping non-consensual deepfakes online. This type of AI safety legislation hasn’t just seen a push in the states; it has momentum at the federal level, too. In 2025, Congress expanded deepfake protections nationwide with the TAKE IT DOWN Act, which aims to stop the spread of non-consensual deepfake images, and members of both parties are working together to do more on AI safeguards. Lawmakers across the aisle have found common ground in protecting against harmful deepfakes.

State lawmakers are also working on laws to regulate how businesses use AI for critical decisions, such as legislation that aims to prevent scenarios in which a company denies insurance claims or fires an employee based on advice from an AI system. These laws typically require companies to be transparent about their use of AI and allow customers or employees to appeal unfair decisions. Colorado has led the way in this category: its legislature passed a comprehensive bill known as the Colorado AI Act, which gives consumers and employees multiple protections against unfair AI-driven decisions. Other states have passed narrower laws, like Connecticut’s SB 10, which regulates AI decisions only in the healthcare industry.

Our elected leaders are also waking up to a newer, scarier threat from AI: its potential to spin out of control and cause a large-scale catastrophe. AI companies currently advertise autonomous AI as systems that help humans by writing emails and code, but that same autonomy can enable impersonation, threats, and advanced cyberattacks. In a study by Anthropic, researchers found that AI models placed in simulated scenarios would manipulate, blackmail, and outright disobey orders in an effort to avoid being shut down. Though it may sound like sci-fi, some top AI researchers, such as Geoffrey Hinton, have warned that AI could even cause human extinction if it escapes our control. If we let AI systems with powerful capabilities spin out of control, or let them fall into the wrong hands, they could cause massive harm.

So far, California is the first and only state to enact a law that safeguards against loss of control. California’s landmark SB 53 requires AI companies to report critical safety risks and alert the government if there’s an emergency. Other state legislatures are considering similar proposals, such as the Responsible AI Safety and Education (RAISE) Act in New York and Michigan’s House Bill 4668, which requires AI systems to undergo third-party audits. On the federal level, multiple bills have been introduced to address large-scale national security threats from AI, like bioterrorism and nuclear war. Similar to California’s SB 53, the Preserving American Dominance in AI Act would require AI companies to report to the federal government on how safe their models are. Another proposal is the AI Whistleblower Protection Act, which would protect employees in the AI industry who flag critical safety risks.

With new technology comes great societal change. The iPhone radically altered the world as we know it, and with the addition of social media, our physical world is inextricably linked to our digital one. AI will impact our physical and digital world, with advanced robotics on the horizon and autonomous vehicles making their way onto more roads. How do we want this to go? As Americans, we have a chance to guide this conversation and encourage smart, safe, and secure policy from state legislatures and our federal government. The future is not something that we leave up to the elites on the coasts, but something we as parents, friends, workers, and students have a direct influence over. Lawmakers need to strike a balance between mitigating the harms of AI and allowing productive AI innovation to flourish. There are plenty of options on the table for ensuring AI systems are safe, and if we enact the right safeguards at the right time, we can continue on the path of American growth and excellence while keeping AI secure for our country and humanity.
