
The Laws That Could Prevent an AI Catastrophe

In 2012, a decade before ChatGPT was released, prominent computer scientist Roman Yampolskiy warned that intelligent AI could escape our control and threaten humanity. Back then, it might have been easy to dismiss his concerns as sci-fi. But today we have real evidence of AI acting in dangerous ways, prompting many to wonder what the government can do to keep AI secure.

There’s growing evidence that AI systems can behave in more dangerous ways as they become more capable, underscoring the need for laws to secure them. For instance, a recent and disturbing study found that an advanced version of ChatGPT was willing to resort to blackmail and even murder to prevent humans from shutting it down. And according to recent tests by NBC News, ChatGPT can now provide assistance with the creation of nuclear weapons. Meanwhile, the CEOs of AI companies have made clear they have no plans to slow down either training or infrastructure buildout, which effectively means there is no turning back. While Sam Altman dreams of an AGI world, one where machines are potentially smarter than humans, many Americans are left wondering, “What does this mean for us? The actual humans?” Addressing the security risks from rogue or dangerous AI will require both legal and corporate safeguards. Luckily, policymakers and researchers are on the job and have produced a variety of proposals aimed at protecting against the worst-case AI scenarios.

A difficult part of regulating AI is that experts and scientists aren’t sure how to make technology that is more intelligent than humans safe. For years, researchers have theorized about how to build an AI system that would stay under our control once it can outsmart us, without success. This problem, creating an AI that stays aligned with human interests and values, is known as the “alignment problem,” and it has stumped AI researchers for decades. That makes it hard for a government to pass laws requiring companies to build safe AI systems: it would be mandating something no one yet knows how to build.

Since we don’t know how to make advanced AI that stays “aligned” with human values, some researchers have called for an outright ban on the development of powerful AI systems. In 2023, many top AI scientists and industry leaders signed an open letter calling for a six-month pause on advanced AI development. This kind of approach would head off the biggest risks from AI, but some worry it could also slow American AI innovation while our adversaries develop AI at a breakneck pace. It raises the question: should we build something if we don’t know what we’re creating? If we aren’t sure we can control an artificial general intelligence, it may be best to quit while we are ahead.

To address that competitiveness concern while maintaining US leadership, others have proposed a global treaty banning the most advanced AI systems. Much as President Reagan negotiated nuclear arms agreements with the Soviet Union, the United States could lead negotiations to prevent the development of the most dangerous AI systems. With enough global cooperation, this approach could reduce the risk of such systems being built anywhere.

Another approach is to fine or otherwise penalize companies whose AI systems cause harm. Liability of this kind gives AI companies an incentive to figure out how to make their systems safe, and it protects people from everyday risks as well as the most catastrophic ones. However, this solution isn’t foolproof: a reckless company could ignore the incentives and plow ahead with risky AI systems anyway, and by the time it is fined, the harm has already been done.

Transparency regulations, which might be less burdensome to companies than directly limiting AI, could become a key part of America’s AI safeguard framework. Senator Chuck Grassley recently introduced one such bill in Congress, the AI Whistleblower Protection Act, which would protect employees at AI companies who come forward about risks posed by their systems. Transparency bills would not prevent catastrophic outcomes on their own, but they would keep the public in the loop so that Americans know when and how to hold AI companies accountable.

We have numerous options on the table for regulating AI, and it’s up to us as Americans to push for solutions that protect our national security. Ideally, AI companies would recognize their position of power and lead with voluntary safety standards. We have already seen what today’s AI models can do, and we must implement safeguards to keep AI benefiting students, workers, and Americans at large.
