Voice authentication systems open to AI attack

New research has found that voice authentication systems are vulnerable to attacks from malicious AI.

Researchers from the University of Waterloo in Canada have developed an algorithm capable of deceiving voice authentication systems with a 72 per cent success rate.

Doherty Associates CISO Alex Bransome warned that the impersonation of individuals is a very real threat, and that many in the industry are unprepared for it.

“AI can successfully deceive voice authentication systems with a 72 per cent success rate. This should serve as an alarm bell for technical leaders that their current systems may not be equipped for the new and emerging threat landscape. AI gives malicious actors the tools to create deceptively realistic content, and it’s spurring an increase in social engineering attacks,” Bransome said.

He added that businesses must prepare their teams for these attacks, particularly as we’re seeing a rise in voice-synthesised phone calls targeting individuals.

“If deceived through a convincing phone call with a ‘friend’ or ‘colleague’, people may unwittingly give threat actors access to their data. Equipping your team with the training and tools should be a top priority for all leaders. The time to act is now,” he warned.