On Tuesday, June 3, The Alliance for Secure AI officially launched at the National Press Club in Washington, DC. You can watch the livestream of the event here.
Below are the remarks our CEO Brendan Steinhauser gave at the event.
---
Good afternoon. Thank you for being here in support of The Alliance for Secure AI’s official launch today.
My name is Brendan Steinhauser, and I’m proud to be the CEO of The Alliance for Secure AI.
Over the past year, I have had many meaningful conversations with AI researchers and advocates who are leading the fight to make sure that AI is developed with safety and security in mind.
Now, The Alliance will bring these important discussions to our elected leaders and to the American people.
It’s not as though conversations about AI aren’t happening. We’ve all seen the steady rise in day-to-day news coverage, and the current administration has made it a top priority to maintain the United States’ dominance in AI development and innovation on the world stage. Of course, this is something that we strongly support.
But we believe that our leaders also need to reckon with just how rapidly AI development is advancing.
AI isn’t just going to remain a chatbot that you can talk to when you’re curious about something, or an agent that will perform simple tasks. The reality is that artificial general intelligence – or AGI – is being developed far more rapidly than most people realize.
AGI can be thought of as AI that can perform almost any cognitive task a human can perform, and perhaps perform it better.
AGI will increase economic productivity, create wealth, accelerate biomedical discoveries, and unlock new advances in science that will benefit humanity.
That being said, we are simply not prepared for the very real risks that it will bring. These risks include massive social disruptions due to the automation of tens of millions of jobs in America, and misuse or even weaponization of AI by malicious actors like terrorists or authoritarian regimes.
Many experts believe that AGI will be followed by ASI – artificial superintelligence, which will rapidly compound the dangerous capabilities and profound risks we face.
It’s not just us saying this. Leaders in Big Tech have been issuing warnings about the dangers of advanced AI for years.
Sam Altman, CEO of OpenAI, has said that AI will “probably most likely lead to the end of the world.” But, he said, there will be some great companies.
Dario Amodei, CEO of Anthropic, said recently that AI could eliminate half of entry-level white collar jobs within five years. He added that we could see an unemployment rate of 10-20% in this country.
Bill Gates said that as AI develops, humans soon won’t be needed for “most things.”
Stephen Hawking, the late renowned physicist, said that AI could “spell the end of the human race.”
Elon Musk recently said that he believes there’s a 10 to 20 percent chance that advanced AI annihilates humanity within the next decade.
Pioneers in the field of AI like Stuart Russell, Geoff Hinton, Max Tegmark, and Yoshua Bengio have been sounding the alarm on the potential for catastrophic and existential risks for some time.
Joe Rogan, Senator Bernie Sanders, Tucker Carlson, Glenn Beck, AOC, and Steve Bannon are also raising serious concerns about advanced AI.
They have focused on issues like the potential for massive automation of jobs, the concentration of wealth and power, the impact on working class Americans, and catastrophic risks.
Now, that’s a group of people who don’t agree very often.
When was the last time an issue bridged both sides of the political spectrum in such a unifying way?
Even the newly elected Pope Leo XIV has warned that AI is a huge challenge facing humanity. In one of his earliest appearances, he said that he chose the name Leo XIV to harken back to a similar time, when the Industrial Revolution was raging and Leo XIII was Pope.
The new Pope, a mathematician, sees AI for what it is – a profoundly transformative new technology that we don’t fully understand and are not certain we can control.
Alarm bells are ringing in the Vatican.
Leaders and thinkers all over the world are waking up to the fact that advanced AI will have a profound impact on every aspect of our society – both positive and negative – from jobs, to health care, to scientific discoveries, to America’s national security, to geopolitics.
However, for every person who is aware of the impact advanced AI could have on our world, there are many more who are not. Even worse, some people are fully cognizant of what could happen if AI develops without safeguards, yet they are unwilling to take the necessary precautions.
Take Sam Altman, for example. On Capitol Hill just last month, he walked back his previous comments and called for very little oversight of Big Tech companies as they race to develop AGI and ASI.
Mr. Altman doesn’t want elected officials or the American people to have any oversight of what his company is building, even though it will affect us all. That’s something we’ve come to expect from Big Tech companies that put profits over the health and well-being of people. Think about their approach to social media apps, for example, which many Big Tech executives ban from their own households.
Now, don’t get me wrong. We at The Alliance for Secure AI believe that it’s imperative that the United States continue to lead in AI innovation and development. Innovation and safety are not mutually exclusive. In fact, we believe that they go hand in hand, and you can’t have one without the other.
But it would be morally, ethically, and politically wrong to plunge head-first into an AI arms race with no concern for how AGI could wreak havoc on society. Unsafe AI systems will not be profitable, nor will they be something that human beings want to use.
Make no mistake, victory in this effort isn’t going to be simple. Truly winning will mean enacting critical safeguards that will ensure AI is developed, deployed, and used in a safe and secure way.
We’re entering what many have already called the Second Industrial Revolution. Only this time, the new technology that we’re developing could soon morph into something that we cannot control. AGI will be able to outthink us, deceive us, and manipulate us to achieve its goals. We’ll be the creators, but we may not be in control.
The best time to prepare for advanced AI was years ago. The second best time is right now. Many, perhaps most, AI researchers and industry leaders believe that AGI will be here within two to three years. This is an urgent problem.
However, we’re not here to fearmonger or be doomsayers. In fact, we appreciate and support the benefits that AI has already brought, and will bring, to the world. But someone needs to be the voice of the American people who want to see adequate safeguards on AI development.
The Alliance for Secure AI is here to be that voice. As a new nonprofit organization, we will be an informational and educational hub for policymakers and the American people. While we grapple with the possible downsides of the advanced AI revolution, we’ll be here to drive the narrative about the need for safety and security.
Our team has deep experience in nonprofit leadership, strategic communications, and public education, and the skills to create and propel the messaging necessary to convey just how much is at stake for humanity if AI develops without proper guardrails.
Our voice will be heard in the spaces where AI discussions have largely been dominated by Big Tech. We’ll work to get our message out in print, on radio and TV, and across digital platforms, warning the American people about what advanced AI could mean for them and their families.
We will create and run targeted advertising campaigns, putting resources behind the messages that resonate, and ensuring that these messages get in front of millions of everyday Americans.
We will take our messages into the heart of Washington, DC, meeting with policymakers and other leaders – making sure that AI safety and security advocates have a seat at the table.
We couldn’t be doing this work without the support of all of you in this room – from advocates, to policy wonks, to former industry workers, to pillars of the AI safety and security community. We will be working beside you every day to accomplish this mission.
Our mission isn’t to stop AI innovation – it’s to ensure that AI development goes well for humanity.
This will continue to be a difficult battle.
However, this is one of the most important battles that humanity will ever face, and we believe that as the American people learn more about advanced AI, they will rise up and join us in this cause.
We are thrilled to officially launch today and publicly begin the difficult but rewarding work ahead. The time to act is right now, and I hope that you’ll join us in our mission to secure AI for our country, for our world, and for all of humanity.
Thank you.