A recent attempt to defraud Ferrari NV using deepfake technology underscores the escalating risk of AI-enabled cybercrime. The sophisticated scheme targeted a senior executive at the Italian luxury car manufacturer and highlights how convincing these scams have become.
Deceptive Messages and a Phony CEO
The incident began with a series of unexpected WhatsApp messages that seemed to come from Ferrari’s CEO, Benedetto Vigna. These messages, which mentioned a major acquisition and requested urgent assistance, appeared authentic but were sent from an unfamiliar number. The profile picture was of Vigna, but the overall presentation felt off.
One message read, “Did you hear about the big acquisition? I could use your help.” Another instructed, “Be ready to sign the Non-Disclosure Agreement our lawyer will send you shortly. The market regulator and stock exchange are already informed. Keep this confidential.”
The Deepfake Phone Call
Soon after the messages, the executive received a call from someone mimicking Vigna’s voice, complete with his distinctive southern Italian accent. The caller explained the use of a different number by citing the need to discuss a confidential deal that faced potential complications in China and required a currency-hedge transaction.
Despite the voice’s convincing quality, the executive grew suspicious after noticing slight mechanical intonations. To verify the caller’s identity, the executive asked about a book Vigna had recently recommended, “Decalogue of Complexity: Acting, Learning and Adapting in the Incessant Becoming of the World” by Alberto Felice De Toni. Faced with the question, the caller abruptly ended the call.
Ferrari’s Response
Following the attempt, Ferrari opened an internal investigation. While the company’s representatives declined to comment publicly, sources confirmed that Ferrari suffered no harm from the scam.
This incident is part of a larger trend where cybercriminals use deepfake technology to impersonate high-profile individuals. A similar attempt targeted Mark Read, CEO of WPP Plc, earlier this year, using deepfake techniques during a Teams call. Though these scams have not yet caused widespread damage, they illustrate the growing threat of AI-driven deceit.
Rising Threat of Deepfake Scams
Rachel Tobac, CEO of SocialProof Security, has observed a surge in such attempts. “This year, we’re seeing an increase in criminals using AI for voice cloning,” she noted.
Generative AI tools can now produce highly convincing deepfake images, videos, and audio, though they are not yet reliable enough to fool targets at scale. Nonetheless, some companies have already fallen victim. Earlier this year, a multinational firm lost HK$200 million ($26 million) after scammers used deepfake technology to impersonate its CFO and other executives, tricking employees in Hong Kong into transferring the funds.
Preparing for Future Threats
In response to these growing threats, companies like CyberArk are training their executives to recognize and respond to deepfake scams. Stefano Zanero, a cybersecurity professor at Politecnico di Milano, warned that AI-based deepfake tools are expected to become increasingly accurate. “It’s just a matter of time before these tools become incredibly sophisticated,” Zanero said.
As AI technology evolves, the potential for deepfake scams to inflict significant damage increases. The Ferrari incident serves as a crucial reminder for businesses to enhance their cybersecurity measures and stay vigilant against the ever-evolving threat of AI-driven fraud.