The advanced AI revolution is on our doorstep.
This takes the form of common tools integrated into software, like Google’s Gemini or Salesforce’s Einstein, and (sometimes) helpful companions like OpenAI’s ChatGPT and Anthropic’s Claude. DALL-E generates dreamlike images and turns us into Studio Ghibli characters.
Useful. Fun. Harmless.
While AI brings automation for daily tasks, memes for group chats, and quick answers to repetitive emails, most Americans do not see the glaring risks of such computing power, or how exponentially it is growing.
Granted, there is (and was) a rightful uproar over deepfakes: AI-generated images and videos with real people’s faces and bodies superimposed. Misinformation is a massive risk of advanced AI, and more than 20 state legislatures have begun working to protect children from deepfake imagery.
This is the tip of the iceberg.
Aside from deliberate misuse by humans, such as deepfakes, a chatbot will routinely spit out false information, hallucinate its reasoning, and link you to nonexistent websites.
Again: this is the tip of the iceberg.
The risks that concern The Alliance push us into new territory.
Autonomous drones. Self-learning machines. Novel pathogens deployed by terrorists.
While many may applaud laws against deepfakes as a step in the right direction, such laws are reactive: lawmakers and the public at large respond to the horrors only once they have been unleashed. There is no visibility into the next-generation AI models being built at the frontier labs.