What if the moon landing hadn’t gone as planned? MIT experts made alternate history come to life in a stunning video using the original contingency speech written in case the NASA mission failed.

With the help of AI, MIT created an extremely convincing deepfake that reproduces President Nixon’s voice and facial movements to announce the grim news.

The project was created to educate people about deepfakes and their deceptive potential. As AI-powered fakes and scams gain sophistication, it’s important to be aware of the many ways technology is creating new opportunities for fraudsters.

Here are some examples of what AI can do in the wrong hands.

1. Voice cloning

It’s possible to recreate a person’s voice, accent and unique prosody from a very small audio sample. In 2019, the CEO of a British energy company was tricked into sending more than $240,000 to a scammer’s bank account after the chief executive of its German parent company seemingly requested a money transfer over the phone.2

Voice cloning technology can be used to impersonate CEOs or loved ones and trick the victim into sending money or sharing personal information.

Impersonating someone over the phone isn’t a new scam technique, but the FTC believes these schemes will become more commonplace, since AI makes voice cloning easier than ever without requiring a significant investment of time or resources from criminals.3

2. Deepfakes

Deepfakes combine voice cloning with AI-generated video. Fraudsters can fabricate a video message in which your CEO appears to ask you to initiate a money transfer, and deepfakes can also be used in blackmail schemes, with fraudsters generating compromising videos of the victim.

Deepfakes are becoming harder to spot, and skilled criminals can create videos that show a person saying or doing anything they want, thanks to generative adversarial networks and the plethora of selfies and other personal images available online. The best deepfakes can even fool facial recognition technology.

3. AI-generated phishing

Scammers can use AI to automate the creation of spoof websites and to send mass phishing emails. Phishing has been an issue for years, but AI makes phishing emails more effective and easier to scale.

Another concerning trend is personalized phishing. While personalization can be a powerful tool for delivering content that feels more relevant, AI can also study a victim’s online behavior and preferences to create highly targeted phishing scams. One study found that AI-generated personalized spear-phishing tweets made victims more likely to fall for phishing schemes.4

4. Bypassing CAPTCHAs

CAPTCHAs act as an additional verification step when a user logs in. Their purpose is to ensure that the user is human and not a bot that is using stolen credentials or conducting a credential-stuffing attack to figure out if stolen passwords can be reused across websites.

Machine learning and image recognition are helping fraudsters bypass CAPTCHA tests, allowing them to deploy credential-stuffing attacks at a wider scale and take over more accounts.
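On the defensive side, the telltale pattern of credential stuffing described above, one source trying many different usernames, can be surfaced with very little code. The sketch below is a simplified illustration: the log format, threshold and IP addresses are all hypothetical, and a production system would weigh many more signals.

```python
from collections import defaultdict

# Sketch: a crude credential-stuffing signal that flags source IPs
# attempting logins against many distinct usernames.
# The threshold and the sample data below are made up for illustration.

def stuffing_suspects(login_attempts, max_usernames=3):
    """login_attempts: iterable of (ip, username) pairs.
    Returns the set of IPs that tried more than max_usernames accounts."""
    usernames_by_ip = defaultdict(set)
    for ip, username in login_attempts:
        usernames_by_ip[ip].add(username)
    return {ip for ip, users in usernames_by_ip.items()
            if len(users) > max_usernames}

attempts = [
    ("10.0.0.5", "alice"), ("10.0.0.5", "alice"),       # normal retry
    ("203.0.113.9", "alice"), ("203.0.113.9", "bob"),
    ("203.0.113.9", "carol"), ("203.0.113.9", "dave"),  # many accounts, one IP
]
print(stuffing_suspects(attempts))  # {'203.0.113.9'}
```

A legitimate user retrying their own password touches one account; a stuffing bot cycling through a leaked credential list touches many, which is why counting distinct usernames per source is a common first-line heuristic.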

A single data point never tells the whole story

Whether it’s a phone call that uses voice cloning or a correct answer to a CAPTCHA test, AI-generated scams usually rely on a single piece of media or data. Although tactics like deepfakes and voice cloning are becoming more convincing, a single piece of media never paints a complete picture.

For instance, a voice, video or written message might be very convincing, but additional data points will reveal inconsistencies, such as:

  • The phone number in the case of voice cloning.
  • The device used to send an email.
  • The location the message is sent from.
  • How typical the request is compared to existing purchase or payment processes.

Examining multiple factors, how they relate to one another, and how they deviate from historical data is the best way to detect and prevent fraud, since it’s very difficult for a scammer to spoof all of these data points in real time.
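To make the multi-signal idea concrete, here is a minimal sketch of combining the data points listed above into a single risk score. The field names, weights and threshold are hypothetical examples, not part of any specific product:

```python
# Sketch: combine several independent signals into one risk score.
# All field names, weights and the flag threshold are hypothetical.

def risk_score(request: dict) -> float:
    """Return a 0.0-1.0 risk score for a payment request."""
    score = 0.0
    if not request.get("caller_number_on_file"):    # unknown phone number
        score += 0.3
    if not request.get("known_device"):             # unfamiliar sending device
        score += 0.2
    if not request.get("usual_location"):           # atypical geolocation
        score += 0.2
    if not request.get("matches_payment_process"):  # deviates from normal process
        score += 0.3
    return score

# A convincing voice clone might pass a human check, yet still arrive
# from an unknown number and ask for an out-of-band transfer:
request = {
    "caller_number_on_file": False,
    "known_device": True,
    "usual_location": True,
    "matches_payment_process": False,
}
print(risk_score(request) >= 0.5)  # True: flagged for review
```

The point of the sketch is that no single signal decides the outcome; it is the accumulation of small inconsistencies that a scammer cannot spoof all at once.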

Leveraging AI to detect scams

As scams become more sophisticated, it’s easy to fall for a convincing deepfake message or AI-generated phishing email if you don’t consider additional data points. Investing in AI makes sense, since this technology can keep pace with the expected scope and sophistication of tomorrow’s scams.

AI and machine learning are invaluable fraud prevention and detection tools that can process thousands or millions of data points across all your processes. Machine learning can use historical data to identify patterns that represent typical user behavior and flag anything out of the ordinary.
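As a minimal sketch of that idea, the example below flags anything out of line with a user’s history using a simple z-score on a single feature (transfer amount). A production system would use many features and a trained model; the numbers here are entirely made up:

```python
import statistics

# Sketch: flag transactions that deviate strongly from a user's history.
# Uses a z-score on one feature (transfer amount) with made-up data;
# real systems learn patterns across many features at once.

def is_anomalous(history: list[float], amount: float,
                 threshold: float = 3.0) -> bool:
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return amount != mean
    z = abs(amount - mean) / stdev  # distance from typical behavior
    return z > threshold

history = [120.0, 95.0, 130.0, 110.0, 105.0]  # typical monthly transfers
print(is_anomalous(history, 115.0))     # False: in line with history
print(is_anomalous(history, 240000.0))  # True: wildly atypical, flagged
```

This is the core of behavioral fraud detection: the model doesn’t need to recognize a deepfake as fake, it only needs to notice that the resulting request looks nothing like what this user normally does.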

A fraudster might be able to fool a human or present a legitimate piece of data, like stolen login credentials, but an AI-powered fraud prevention system will spot inconsistencies by analyzing additional data points.

Learn more about how Fraud.net uses AI to prevent fraud, and listen to our podcast about facial recognition if you’re curious to learn more about this technology’s potential for fraud prevention and ethical considerations.