The Dark Side of GenAI: Safeguarding Against Digital Fraud

Angus McDougall, Regional Vice President, Asia Pacific & Japan at Entrust

In 2023, YouTube users in Singapore were served an unusual advertisement featuring an interview between Loke Wei Sue, a newscaster on Channel NewsAsia, and Elon Musk, CEO of Tesla and owner of X. In it, Musk spoke favourably about a new artificial intelligence (AI)-driven investment app that allowed users to “earn up to US$237 per hour right away”.

Loke never conducted such an interview, nor did Musk attend it. Instead, the entire advertisement was created via deepfake technology.

In Australia, the public has recently been confronted by news of high school students using deepfake images to defame their classmates, while images of finance minister Katy Gallagher and foreign minister Penny Wong have been used in an investment scam, continuing a trend of Australian politicians’ likenesses being exploited for online fraud.

As much as generative artificial intelligence (GenAI) has created exciting new opportunities, it has unfortunately also caught the attention of fraudsters. In this brave new digital-first world, fraudsters have more tools than ever, and it is set to cost businesses dearly. Globally, online payment fraud losses are predicted to rise from US$38 billion in 2023 to US$91 billion in 2028.

The rise of the GenAI fraudster

In the past, organised criminal enterprises had more resources and, thus, posed a higher threat to businesses. However, with GenAI, even the most amateur fraudsters now have easy access to more scalable and increasingly sophisticated types of fraud.

The evidence is in the data. According to Onfido, an Entrust company, 71.7% of the fraud caught in APAC between 2022 and 2023 was considered “easy”, or less sophisticated, with the remainder classed as “medium” (28.24%) or “hard” (0.05%). The level of sophistication is growing, however: in the last six months, “medium” and “hard” fraud has risen to 36.4% and 1.4% respectively.

How fraudsters are using GenAI deepfakes

GenAI programmes have made it easy for anyone to create realistic, fabricated content. Take deepfake videos, for example: fraudsters have started using such videos to try to bypass biometric verification and authentication methods.

This type of attack has surged in recent years. Comparing 2023 with 2022, there’s been a 3,000% increase in deepfake attempts globally. Compounding the situation is the growing popularity of “fraud-as-a-service”, where experienced fraudsters offer their services to others.

Document forgeries

When it comes to document forgeries, fraudsters create four types:

- Physical counterfeits: fake physical documents
- Digital counterfeits: fake digital representations of documents
- Physical forgeries: physically altered or edited versions of existing documents
- Digital forgeries: digitally altered or edited versions of existing documents

Businesses operating in Asia-Pacific are more likely to see higher document fraud rates (9%) than those in Europe (3.1%) and North America (5.1%), with the most frequently attacked document types being ID cards (51%) and tax IDs (29%). The high proportion of digital forgeries can be attributed in part to the emergence of websites such as OnlyFakes, an online service that sells fabricated images of identity documents.

Synthetic identity fraud

Next, synthetic identity fraud occurs when criminals combine fake and real personal information, such as national ID details, to create new identities. These fake identities can then be used to open accounts, access credit, or make fraudulent purchases.

GenAI lets fraudsters generate this fake information at scale. AI bots can scrape personal information from online sources, including online databases and social platforms, which is then collated into synthetic identities. The technique is so effective that synthetic identity fraud is predicted to cause total losses of US$23 billion by 2030.

Phishing

Finally, phishing is a type of social engineering attack often used to steal user data. Fraudsters may reach out to individuals via email or other forms of communication requesting they provide sensitive data or click a link to a malicious website, which may contain malware.

Again, GenAI tools allow fraudsters to create sophisticated, personalised social engineering scams at scale. For example, they can use AI tools to write convincing phishing emails or to automate card cracking. In fact, according to recent research, GenAI was one of the top tools used by bad actors in 2023. WormGPT, in particular, is a malicious AI tool designed to automate the creation of convincing, personalised fake emails and other malicious content.
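To make the defensive side of this concrete, here is a minimal sketch of the kind of heuristic scoring an email filter might apply. The keyword list, link check, and scores are illustrative assumptions only; real anti-phishing systems combine ML classifiers, sender reputation, and URL intelligence.

```python
# Toy phishing-email scorer. All keywords and weights below are
# assumptions for illustration, not a production ruleset.
URGENCY = {"urgent", "immediately", "verify", "suspended", "password"}

def phishing_score(subject, body, links):
    """Score an email: higher means more phishing-like."""
    score = 0
    text = f"{subject} {body}".lower()
    # Urgency language is a classic social-engineering tell.
    score += sum(1 for word in URGENCY if word in text)
    # Links whose visible text claims one domain but point elsewhere
    # are weighted more heavily.
    for shown, target in links:
        if shown and shown not in target:
            score += 2
    return score

suspicious = (
    "Urgent: account suspended",
    "Verify your password immediately at the link below.",
    [("mybank.com", "http://mybank.example.net/login")],
)
print(phishing_score(*suspicious))  # scores well above a benign email
```

A benign message ("Lunch", "See you at noon", no links) scores zero under the same rules, which is the point of the sketch: GenAI-written phishing raises the bar precisely because it learns to avoid such obvious tells.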

Combatting GenAI fraud with AI

We are entering a new phase of fraud and cyberattacks. As such, the best cyber defence systems of tomorrow will need AI to combat the speed and scale of attacks—think of it as an “AI versus AI showdown”.

With the right training, AI algorithms can recognise the subtle differences between authentic and synthetic images or videos, which are often imperceptible to the human eye. Machine learning, a subset of AI, plays a crucial role in identifying irregularities in digital content. By training on vast datasets of both real and fake media, machine learning models can learn to differentiate between the two with high accuracy.
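The training loop described above can be sketched in miniature. The two “artifact” features, the synthetic data, and the thresholds below are stand-ins invented for this example; a real deepfake detector would learn from large labelled corpora of genuine and generated media, typically with deep networks rather than the simple logistic regression used here.

```python
import math
import random

random.seed(42)

# Toy feature vectors: [noise_residual, blink_irregularity].
# Assumption for this sketch: genuine media score low on both,
# synthetic media score high.
def make_sample(is_fake):
    base = 0.7 if is_fake else 0.2
    features = [base + random.uniform(-0.15, 0.15) for _ in range(2)]
    return features, 1 if is_fake else 0

data = [make_sample(i % 2 == 0) for i in range(200)]

# Logistic regression trained by stochastic gradient descent.
w, b, lr = [0.0, 0.0], 0.0, 0.5
for _ in range(500):
    for x, y in data:
        p = 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
        err = p - y  # gradient of the log-loss w.r.t. the logit
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

def predict(x):
    """Return 1 (fake) or 0 (real) for a feature vector."""
    p = 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
    return 1 if p > 0.5 else 0

print(predict([0.75, 0.70]))  # high-artifact sample
print(predict([0.15, 0.20]))  # low-artifact sample
```

The model learns a decision boundary between the two clusters; the same principle, scaled up to millions of parameters and real pixel-level features, underpins commercial deepfake detection.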

Securing digital identities against fraud

As AI-driven attacks continue to rise, businesses must consider AI-powered, identity-centric solutions that protect the integrity and authenticity of digital identities. Such solutions can help combat phishing and credential misuse with biometrics and digital certificates, neutralise deepfakes with AI/ML-driven identity verification, and authenticate customers or employees via trusted digital onboarding. These capabilities will help businesses reduce fraud exposure and stay compliant with standards and regulations.
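One reason certificate- and biometric-based authentication resists phishing is the challenge-response pattern: the user proves possession of a credential without ever transmitting it. The sketch below illustrates that pattern with a stdlib HMAC over a shared secret; real identity-centric deployments use asymmetric keys and digital certificates (for example PKI or FIDO2), so treat this as a simplified stand-in.

```python
import hashlib
import hmac
import os

# Simplified challenge-response sketch. The shared-secret HMAC here
# stands in for a certificate-backed signature so the example stays
# stdlib-only.
SECRET = os.urandom(32)  # provisioned to the user's device at enrolment

def issue_challenge():
    """Server issues a fresh nonce for each login attempt."""
    return os.urandom(16)

def sign_challenge(secret, challenge):
    # The device proves possession of the secret without revealing it.
    return hmac.new(secret, challenge, hashlib.sha256).hexdigest()

def verify(secret, challenge, response):
    expected = hmac.new(secret, challenge, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

challenge = issue_challenge()
response = sign_challenge(SECRET, challenge)
assert verify(SECRET, challenge, response)

# A captured response replayed against a new challenge fails, which
# is what makes phished or intercepted responses useless.
assert not verify(SECRET, issue_challenge(), response)
```

Because each response is bound to a one-time challenge, a fraudster who phishes or intercepts one login exchange gains nothing reusable, unlike a stolen password.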

As we look to the future, it is essential to embrace these innovations not just as a means of defence but as a proactive strategy. With identities protected and the potential for fraud diminished, we pave the way for a secure, more trustworthy digital ecosystem.