How Cybercriminals Used AI To Mimic A CEO's Voice To Steal €220,000!
Posted on 21st April 2022 at 15:26
Hackers used artificial intelligence-based software to impersonate a company's CEO during a phone scam, fraudulently requesting that more than €220,000 be transferred. Experts described this as a rare example of AI being utilised in a cyber security attack.
Cybercriminals Use AI, Machine Learning To Mimic CEO's Voice In €220,000 Cyber Security Attack.
A Voice Deepfake Has Been Used In A Scam For The First Time.
What's the big deal? The CEO of an energy firm in the UK genuinely believed he was speaking with his boss, the CEO of the firm's German parent company. The familiar voice, tone and slight accent did not seem suspicious, and his "boss" told him the request was urgent and that he should follow orders immediately by making the money transfer within the hour.
During the cyberattack, the unidentified fraudster who carried out this plot appears to have successfully mimicked the boss's voice with AI-based software. It is an example of how criminals will undoubtedly continue using new technologies like artificial intelligence, automating attacks and making them harder to detect for law enforcement authorities and victims alike.
According to sources in Europe, this is the first known voice-spoofing attack of its kind, and companies now face a new problem with AI-based deepfake scams. Our cyber security research team created a clone of one of our co-founding directors' voices; the Artificial Intelligence (AI) and Machine Learning (ML) software took only 15 minutes to create the deepfake.
AI Automated Cyber-attacks.
Cyber attacks may have just become a lot more sophisticated. We don't know whether this is an isolated incident, but the director of Europol's European Cybercrime Centre said we could well see more AI-fuelled hacking in the future.
The criminals who targeted the British energy company were clever enough to use bots for their enquiries. They called the company on three occasions: first to initiate the transfer, second to falsely claim it had been reimbursed, and third to request a follow-up payment.
The victim became suspicious when he realised the promised reimbursement had not arrived and that the call had come from an Austrian phone number.
Once the cash had been transferred, it was forwarded to an account in Mexico and then on to other locations, making it even more difficult to identify the fraudsters and prompting an international manhunt involving Interpol agents across Europe.
Machine Learning Technology.
Machine learning technology can produce convincing video footage or audio recordings of people doing or saying things that never really happened. Common deepfakes include speech synthesis, where software mimics a person's voice; face re-enactment, where software manipulates a person's features; and face generation, which creates a convincing new image of a person who does not exist.
With modern voice-imitation services, it doesn't take a genius to create a convincing synthetic voice. The hackers could have used these software programs, or combined clips of different people's voices, for their purposes, whether that be intimidating victims or simply seeking attention on social media sites like Facebook. In a world where technology can mimic voices, life is easy for hackers and cybercriminals alike. The head of the United Nations Interregional Crime and Justice Research Institute's Centre on AI and Robotics said:
"This new development will make their crimes even easier than before, and they don't have to be worried about being caught in any lies. Keeping up appearances when creating fake videos through machine learning technologies like voice morphing."
Since no suspects have been identified, it is unknown how they gathered the voice data from the German executive or what software they used. Still, this unusual case shows just how easily companies can be duped and how machine learning can be used to carry out sophisticated cybercrimes.
Risk To Reputation.
Individuals and businesses are now at risk of reputational damage, and increasingly sophisticated phishing attacks and email scams will become harder to detect.
Weak internal procedures and human error are the leading causes for concern. Companies should therefore take precautionary steps to protect themselves from these attacks by raising awareness, providing staff with security awareness training, and strengthening authorisation procedures.
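The article does not prescribe a specific control, but one way to strengthen authorisation procedures is to require that every payment request carry proof from a second channel that a voice call alone cannot supply. The sketch below is purely illustrative (the function names and workflow are hypothetical, not from any particular product): payment details are signed with a key shared in advance over a trusted channel, so a convincing phone voice with the wrong or missing signature is rejected.

```python
import hashlib
import hmac
import secrets

def sign_request(shared_key: bytes, request_id: str, amount: str, iban: str) -> str:
    """Sign the payment details with a key shared out-of-band in advance."""
    message = f"{request_id}|{amount}|{iban}".encode()
    return hmac.new(shared_key, message, hashlib.sha256).hexdigest()

def verify_request(shared_key: bytes, request_id: str, amount: str,
                   iban: str, signature: str) -> bool:
    """Recompute the signature and compare in constant time."""
    expected = sign_request(shared_key, request_id, amount, iban)
    return hmac.compare_digest(expected, signature)

# The key is distributed via a trusted channel, never over the phone.
key = secrets.token_bytes(32)

# A legitimate request arrives with a valid signature and verifies.
sig = sign_request(key, "REQ-001", "220000.00", "DE89370400440532013000")
assert verify_request(key, "REQ-001", "220000.00", "DE89370400440532013000", sig)

# A tampered amount (or a request with no valid signature) is rejected,
# no matter how convincing the caller's voice sounds.
assert not verify_request(key, "REQ-001", "999999.00", "DE89370400440532013000", sig)
```

The point of the sketch is the process, not the cryptography: a fraudster who can only mimic a voice cannot produce the signature, so the transfer fails a check that does not depend on human judgement under pressure.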
Traditional cyber security software designed to keep hackers off business networks may not detect faked voices. Still, detection tools are now available to help screen for deepfake audio and video.
It is also worth checking that your cyber insurance cover includes losses suffered from AI-based fraud.
If you would like more information on protecting your business from social engineering techniques or need help introducing appropriate safeguards within your organisation, please contact us, and we will be happy to assist you.