How Gen AI Undermines KYC

Banks and lenders need to check that customers are who they say they are. These “Know Your Customer” (KYC) rules mean proving your identity, and today that relies on ID documents and face checks. But what happens when artificial intelligence can make convincing fake IDs and faces? New generative AI tools threaten to break all of those checks.

Let’s look at how these tools can already fake ID pictures, why basic video checks fail against what they produce, and why companies will need much stronger identity verification in the future.

It’s Easy to Fake ID Pictures

Lenders often match government ID cards to faces. New AI tools can now fake these well: with free software like Stable Diffusion, anyone can generate convincing fake ID photos. No special skills are needed.

If faking IDs becomes this easy, it will happen far more often. As someone who builds this kind of technology, I realize we can’t stop people from using it. But we can reduce the harm.

Let’s walk through how a fake ID picture gets made. The main steps are:

  1. Use AI to generate fake images of a person’s face
  2. Composite this fake face into a background photo
  3. Edit a fake or real ID card into the person’s hand

One person online told me he can produce a convincing fake ID shot in one to two days, far faster than doing the same work in Photoshop.

The big time-saver is the AI model generating the fake face. It handles details like lighting and angles so the fake looks real, and finer points such as shadows check out.

And submitting these fakes to ID checks is simple too. Mobile apps can be modified to accept AI-generated images instead of real camera captures, and for websites there are virtual webcams that can feed in fake live video.

So with freely available tools, producing false IDs and submitting them barely takes any work.

Fooling the Quick Video Checks

Some argue that quick video liveness checks, which make users turn their heads or blink, will stop AI fakes. These small “are you real” tests are meant to beat the fakers.
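
To make this concrete, here is a minimal sketch of what a blink-based liveness check can look like under the hood. It assumes the open-source MediaPipe face-mesh library; the eye landmark indices and the blink threshold are common illustrative choices, not any vendor’s actual implementation.

```python
# Minimal sketch of a blink-based liveness check (illustrative only).
# Assumes opencv-python and mediapipe are installed; the landmark indices
# and blink threshold below are common choices, not a vendor spec.
import cv2
import mediapipe as mp

# MediaPipe 468-point face-mesh indices around the left eye.
LEFT_EYE_TOP, LEFT_EYE_BOTTOM = 159, 145
LEFT_EYE_LEFT, LEFT_EYE_RIGHT = 33, 133
BLINK_RATIO = 0.20  # eye "openness" below this counts as a blink

def eye_openness(landmarks) -> float:
    """Ratio of vertical eye opening to horizontal eye width."""
    top, bottom = landmarks[LEFT_EYE_TOP], landmarks[LEFT_EYE_BOTTOM]
    left, right = landmarks[LEFT_EYE_LEFT], landmarks[LEFT_EYE_RIGHT]
    vertical = abs(top.y - bottom.y)
    horizontal = abs(left.x - right.x) or 1e-6
    return vertical / horizontal

def saw_blink(video_path: str) -> bool:
    """Return True if at least one blink is detected in the clip."""
    cap = cv2.VideoCapture(video_path)
    blinked = False
    with mp.solutions.face_mesh.FaceMesh(refine_landmarks=True) as mesh:
        while not blinked:
            ok, frame = cap.read()
            if not ok:
                break
            result = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if not result.multi_face_landmarks:
                continue
            landmarks = result.multi_face_landmarks[0].landmark
            if eye_openness(landmarks) < BLINK_RATIO:
                blinked = True
    cap.release()
    return blinked
```

The point is that the check is just software reading pixels from a video stream. A virtual webcam feeding it an AI-generated clip that blinks on cue passes exactly the same test.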

But these basic tests fall flat against AI advances. Machine learning models can already produce video clips that look and sound convincing enough to trick both people and detection programs.

And the tools that let anyone produce these fakes keep improving. One security researcher told me that today’s easily made, home-grown fake video already fools human screeners. When ordinary people can whip up fakes that beat the system, those checks stop meaning much.

Stronger Identity Verification

With AI threats looming across banking, social platforms, and more, where do we go from here?

It’s like the classic story of King Pyrrhus. He could beat Rome in battle, but every victory cost him so much that it left him weaker. Eventually, he lost the war to attrition.

The same goes for ID checks. Passing any single quick video check matters little if fakes are cheap and easy to make. Instead, we need solid, layered identity systems that are genuinely hard to beat.

Some good ways to bolster protection include:

  • Multi-step login via phones and hardware keys like YubiKeys (a minimal sketch of the phone-based part follows this list)
  • Checking voice patterns against a stored voice profile
  • Monitoring how users behave over time to catch unusual changes
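
As an illustration of the first item, here is a minimal sketch of verifying a one-time code from a phone authenticator app as a second login factor. It assumes the open-source pyotp library; user lookup and secret storage are simplified, and the account and issuer names are made up.

```python
# Minimal sketch of phone-based multi-factor login using TOTP codes,
# as generated by authenticator apps. Assumes the pyotp library;
# secret storage and user lookup are simplified for illustration.
import pyotp

def enroll_user() -> str:
    """Create a per-user secret; in practice, store it encrypted server-side."""
    secret = pyotp.random_base32()
    uri = pyotp.TOTP(secret).provisioning_uri(
        name="user@example.com", issuer_name="ExampleBank"  # placeholder names
    )
    print("Scan this URI with an authenticator app:", uri)
    return secret

def verify_second_factor(secret: str, submitted_code: str) -> bool:
    """Check the six-digit code the user typed in after their password."""
    totp = pyotp.TOTP(secret)
    # valid_window=1 tolerates small clock drift between phone and server.
    return totp.verify(submitted_code, valid_window=1)
```

Hardware keys like YubiKeys go a step further by tying the login to a physical device through WebAuthn, something an AI-generated image or video cannot reproduce.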

Single checks, even face scans, fail in the age of AI generation. Real safety requires gathering several kinds of hard-to-fake proof over time.
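
To show what “gathering different hard-to-fake proof” might look like in practice, here is a toy sketch that combines several signals into a single decision. Every field name, weight, and threshold below is hypothetical and only meant to illustrate the layering idea.

```python
# Toy sketch of layered identity verification: no single signal decides,
# the system combines several. All names, weights, and thresholds are
# hypothetical and for illustration only.
from dataclasses import dataclass

@dataclass
class IdentitySignals:
    document_check_passed: bool    # e.g. ID document scan result
    hardware_key_present: bool     # e.g. WebAuthn / YubiKey assertion
    voice_match_score: float       # 0..1 similarity to a stored voice profile
    behavior_anomaly_score: float  # 0..1, higher means more unusual activity

def risk_score(s: IdentitySignals) -> float:
    """Lower is safer; weights are illustrative, not calibrated."""
    score = 0.0
    score += 0.0 if s.document_check_passed else 0.35
    score += 0.0 if s.hardware_key_present else 0.25
    score += 0.25 * (1.0 - s.voice_match_score)
    score += 0.15 * s.behavior_anomaly_score
    return score

def decide(s: IdentitySignals) -> str:
    score = risk_score(s)
    if score < 0.2:
        return "allow"
    if score < 0.5:
        return "step-up"  # ask for an extra factor or manual review
    return "deny"

# Example: good document and voice, but no hardware key and odd behavior.
print(decide(IdentitySignals(True, False, 0.9, 0.7)))  # prints "step-up"
```

The exact numbers don’t matter. What matters is that an attacker now has to defeat a physical device, a biometric profile, and a behavioral history at once, not just a single photo check.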

The Bottom Line

Generative AI makes high-quality fakery fast and cheap. As these tools spread, identity checks must step up so people can still trust the systems that depend on them. Quick video tests no longer cut it. Safety needs layered proof across hardware devices, biometrics, and long-term behavior.

There’s no stopping the AI tidal wave. But we can build sturdier walls around identity to hold back the flood.
