
A Wake-Up Call: AI Voice Deception

In 2024, a Brooklyn woman named Robin was jolted awake by an alarming phone call: her mother-in-law was on the line, and she sounded like she was in distress. Robin woke her husband, and they listened anxiously as a man demanded a $750 ransom over Venmo. Frightened, they sent the money to the criminals.

However, Robin’s mother-in-law was never in danger. Scammers had fabricated the phone call, using advanced AI to clone her voice. Stories like Robin’s are multiplying across America—from a Los Angeles man who lost $25,000 in a deepfake scam to a Phoenix mother targeted by an AI-powered fake-kidnapping scam. These deepfake scams are becoming more common as criminals find new ways to weaponize AI against everyday families.

AI has unique capabilities that make it an especially dangerous tool for cybercriminals, chief among them its ability to impersonate people with startling accuracy. Businesses have traditionally relied on human interaction to authenticate people’s accounts: banks, for example, often call users to verify their identity, because voices have historically been difficult for cybercriminals to fake.

But with AI tools like deepfakes, hackers can now mimic voices, appearances, and writing styles with just seconds of sample data, allowing them to get into bank accounts, steal identities, and even trick employees into transferring company funds to criminal accounts. In one notorious case, a Hong Kong company lost $25 million when criminals used deepfake video technology to impersonate the company’s CFO during a video conference call. These attacks aren’t just targeting major corporations—small businesses and individuals face similar threats, with limited resources to defend themselves against AI-powered fraud.

The Escalating Cyber Threat

AI allows cybercriminals to discover new ways to evade our cyber defenses at a pace that human security experts struggle to match. For example, Google researchers recently discovered a piece of malware that constantly rewrites its own code, making it difficult for IT experts and antivirus software to detect. The result is a cat-and-mouse game: criminals are using AI to design ever more advanced attacks, while researchers race to develop more robust defenses.

The threat has now impacted America’s national security. Recently, Chinese hackers exploited AI’s capabilities to launch the first large-scale AI cyberattack against 30 different organizations, including technology companies and government agencies. They used AI tools from Anthropic, an American AI company, to rapidly scan companies for vulnerabilities and to categorize the information they found. Fortunately, Anthropic detected that the hackers were abusing its tools and shut down the attack. But the company found something disturbing: humans were responsible for only 10-20% of the cyberattack; AI did the rest.

This represents a fundamental shift in the cybersecurity landscape. We’re approaching a future where AI agents might become capable of performing cyberattacks entirely on their own, operating 24/7 without human oversight. They may even come to outsmart human cybersecurity experts through sheer computational speed and pattern recognition.

What We Must Do Now

In this new age, we must stay vigilant to protect people’s information and accounts from sophisticated threats. But individual vigilance isn’t enough. We also need the government to develop comprehensive legal safeguards—things like mandatory security standards for AI systems, accountability frameworks for AI developers, and penalties for AI-enabled crimes—to keep American families and businesses safe from AI threats. Our national security depends on swift, decisive action from lawmakers.
