A Diverse Coalition Stands Against Superintelligent AI

When Meghan Markle, Steve Wozniak, and Steve Bannon are on the same page, you know something significant is happening. This unlikely alliance recently signed a letter calling for a ban on superintelligent AI development. More than 50,000 people have signed, including royalty, figures from both political parties, AI researchers, and conservative commentators. This diverse coalition marks a crucial turning point: the fight over AI's future is no longer confined to Silicon Valley boardrooms or academic conferences. AI has become a matter of urgent public concern, uniting people across ideological divides around a simple premise: for the sake of humanity, we need to stop racing toward superintelligence.

The tech industry has long claimed that public enthusiasm drives AI development. This narrative conveniently ignores reality. Recent polling revealed that only 5% of Americans support the current pace of unregulated AI development, while 64% believe superintelligence should not be built until proven safe and controllable. Prince Harry captured this disconnect perfectly with his statement in the open letter that “The future of AI should serve humanity, not replace it.” 

Yet companies like Meta, OpenAI, and Google continue their breakneck race toward artificial general intelligence (AGI), spending hundreds of billions of dollars on technology that most people view with deep skepticism. When figures as varied as Nobel laureates, former military leaders, and members of royal families unite to demand restraint, they aren’t being alarmist—they’re standing with everyday Americans against an industry that has become dangerously tone-deaf to public concerns.

What makes this moment particularly striking is that warnings are also coming from inside the house. One of the signatories was Yoshua Bengio, the so-called “godfather of AI” whose pioneering work enabled today’s AI breakthroughs. Another signatory, Geoffrey Hinton, was a Google AI researcher who left the company to speak freely about how superintelligence could manipulate or outmaneuver humans. Over 10 current and former OpenAI employees also signed the open letter. These people aren’t outside critics or fearmongers. They’re the people who built these systems, and they’re terrified of what comes next.

AI risks are mounting daily. AI systems already demonstrate concerning capabilities, including deception, self-preservation behaviors, and the potential to be weaponized for bioterrorism or cyberattacks. Research shows current models can engage in “alignment faking,” or pretending to be safe during testing while concealing dangerous intentions. Mark Zuckerberg recently declared that superintelligence is “now in sight,” yet we have no proven methods to control systems smarter than humans. This is not science fiction. It’s the urgent reality facing us in 2025 as companies pour unprecedented resources into technologies that even their own developers cannot fully predict or constrain.

The path forward requires immediate action from all Americans: parents, students, workers, and lawmakers alike. Common-sense safeguards, including a prohibition on superintelligence development until scientific consensus confirms safety, don't inhibit innovation. They are a survival instinct. States must continue passing AI safety laws. Companies must be held accountable for harms their systems cause. Most importantly, the public must resist industry attempts to dismiss these concerns as hysteria. When diverse voices unite in alarm, we should listen. Our future depends on it.