
Deepfake Fraud Tools Lagging Behind Expectations

by David Walker

Deepfake-generating software is evolving from a toy into a legitimate fraud threat, but evidence suggests the progression is happening more slowly than predicted.

World Economic Forum (WEF) researchers reviewed 17 open source and commercially available deepfake programs that were online between July 2024 and April 2025. They evaluated each tool's approach and its ability to undermine facial-recognition algorithms, particularly in the context of sensitive Know Your Customer (KYC) verification checks.

The results of the study were mixed, and the study itself was limited by a lack of hands-on testing; it relied on the tools' own documentation and information gathered from other open sources online. The researchers found the majority of the tools to be relatively cheap and superficial, marketed for social media and other entertainment purposes, though some were at least ostensibly aimed at professional users. A more worrying minority appeared to offer advanced features capable of enabling serious identity fraud.

“They’re available on the black market, one hundred and fifty to two-hundred bucks per account,” warns Tom Cross, head of threat research at the deepfake detection company GetReal Security. “Those accounts are being bought by people that are engaged in money laundering. That stuff is happening systemically right now. Threat actors definitely know that they can get KYC-validated bank accounts using deepfakes.”

The Ecosystem of Deepfake Software Today

Deepfake programs today fall into three buckets, experts say: post-production video editing tools, hosted Web services, and real-time webcam swappers. Tools in the first two categories may be able to create convincing deepfake files, but only webcam swappers threaten to trick a verification algorithm live.

Of the 17 platforms the WEF studied, five were webcam swappers. And of those five, just three could inject fake imagery directly into the kinds of video feeds used in KYC checks. Six of the 17 used motion-capture technology capable of picking up on and reflecting tiny movements, nuanced facial expressions, and the like. But only two tools could handle difficult and dynamic lighting conditions, and even in those cases, they were only really effective when processing pre-recorded content, and aided by some manual tweaking. Altogether, then, WEF’s findings suggest that though the technology is improving, the vast majority of deepfake tools still struggle with live KYC checks.

iProov chief technology officer (CTO) Dominic Forrest, however, argues that the problem is actually much worse. His firm tracks more than 120 deepfake products on the Web today. Within that larger sample, he says, “many of these tools are toys, but there are also many out there which are not, and the quality has moved so much, really only in the last 18, 20 months to a stage where you can’t tell the difference by eye.” 

For example, he says, “People used to say, ‘Put on and take off your glasses,’ or ‘Turn your head,’ or something like that, and it would break up [the deepfake]. And that absolutely was the case. It is no longer the case for many — in fact, most, I would say — of the good tools now.”

Winning the Deepfake Arms Race

Even when deepfakes beat the eye test, the WEF researchers suggested, there are dozens of ways that vendors, organizations, and teams of all kinds can suss out quality fakes. For example:

  • KYC solutions can attack the elements deepfake programs struggle with most — for example, briefly flashing the user’s screen and seeing if the resulting light acts as one would expect.

  • Fraud teams can analyze not just what happens on camera, but all of the metadata around a KYC check.

  • Organizations can use a defense-in-depth approach, so that they’re not relying on just one or two means of proving an individual’s identity.
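The first of those checks — briefly flashing the user's screen and verifying that the reflected light behaves as expected — can be sketched as a simple brightness comparison between frames captured before and during the flash. This is a minimal illustration of the underlying idea, not any vendor's actual implementation; the function name, frame format, and threshold are all hypothetical.

```python
import numpy as np

def passes_flash_check(frames_before, frames_during, min_brightness_jump=8.0):
    """Naive liveness heuristic: when the screen flashes white, light
    reflected off a real face should raise the mean frame brightness.
    A feed injected by a deepfake tool typically shows no such change,
    because the fake imagery never "sees" the flash.

    frames_before / frames_during: lists of grayscale frames
    (2-D uint8 NumPy arrays) captured around the flash event.
    """
    baseline = np.mean([f.mean() for f in frames_before])
    flashed = np.mean([f.mean() for f in frames_during])
    return (flashed - baseline) >= min_brightness_jump

# Hypothetical frames: a live face brightens noticeably under the flash...
live_before = [np.full((4, 4), 100, dtype=np.uint8)]
live_during = [np.full((4, 4), 130, dtype=np.uint8)]
# ...while an injected deepfake feed stays essentially flat.
fake_before = [np.full((4, 4), 100, dtype=np.uint8)]
fake_during = [np.full((4, 4), 101, dtype=np.uint8)]
```

In practice a real KYC system would combine a signal like this with the other layers above (metadata analysis, defense in depth) rather than relying on a single threshold, which a sophisticated attacker could eventually learn to simulate.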

Thankfully, in contrast to most cybersecurity trends, the defenders are really ahead of the attackers here. Forrest attributes this, in part, to an imbalance in information. IT hackers have all the time in the world to learn about the systems they might want to attack. When it comes to KYC fraud, he says, “We learn vast amounts about every attack. We can study them. We can see what the attacker’s doing. Whereas all they get back is a single yes or no answer. And so they learn nothing. They don’t know if they’re improving or not.”

Ironically, the fact that deepfakes are so realistic today is actually now working against attackers’ interests. Before, they could measure their progress toward realism with their eyes. Now, they have to counteract defensive techniques they have no knowledge of. Forrest points out that “what looks really, really good to your eye is not necessarily the same as what looks very, very good to detection software. So if as a human being, you can’t recognize the differences, it’s very, very hard to understand how to attack them.”

“It’s a completely unfair, one-sided battle,” he says. “And you know what? I’m OK with it being completely unfair.”


