FraudCON 2024 — Deepfakes & Online Scams

8 August 2024

Taking the opportunity, at the end of June I went to a conference to hear what interesting things they would say on one of my favorite topics: fraud, scams, and the achievements of human ingenuity in this area.

Afterwards I put together a presentation for my colleagues with the most notable and amusing moments. And I want to share it with you too. 😁

Deepfakes

Deepfakes are videos, audio, or images where genuine content has been altered with artificial intelligence and deep learning algorithms. Techniques range from face-swapping and voice cloning to generating entirely artificial content that never existed in reality and forging live video meetings.

With modern technology, you don't need any specific expertise to create a deepfake. It is also pretty cheap and really fast.

Bad things you can do with deepfakes:

  1. Celebrity deepfakes exploit the celebrity's social influence and cultural authority to spread misinformation and run scam advertising (crypto scams, investment scams, lottery scams).

  2. Deepfakes of recognizable political leaders are used to manipulate opinions by spreading disinformation and defamation.

  3. Deepfakes of technology leaders or famous journalists announcing a security breach, a technological breakthrough, or a product launch can be used to manipulate stock prices.

You may say:

I'm an ordinary person with an ordinary income, I don't have any political or cultural influence, my only audience is my cat, and I don't have any authority with her. Trying to deepfake me is a waste of resources.

Let's do a "thinking like a criminal" exercise:

How can criminals use deepfakes of ordinary people? These ideas came to my criminal mind:

  1. Pretending to be one's boss and requesting confidential info or a funds transfer

  2. Taking over one's messenger or social media account and targeting their contacts

  3. Targeting one's contacts from a new account, pretending the old one is blocked or hijacked

  4. Deepfaking a video of some illegal or inappropriate activity (sex, drugs, cracking racist jokes) and using it for blackmail

I even have an idea for a startup with the following value proposition: "Our experts will use your deepfaked face to do a video interview or a video exam for you."

Google introduced me to other possible use cases:

  • Revenge pornography (and nonconsensual pornography in general)
  • Bullying
  • Custody disputes
  • Scams and fraud of all kinds (romance, technical support, "mama, I killed a man", fooling biometrics and liveness detection in security systems, etc.)

There is also a bunch of legal implications:

  • People claim that deepfakes are real evidence
  • People claim that real evidence is actually a deepfake
  • Lawyers demand more and more proof, making it more expensive and time-consuming for the parties to reach all the necessary experts and obtain an expert opinion on each "unit" of evidence

And this evidence-related problem suddenly has a huge impact on social science:

"Suddenly there's no more reality. People no longer believe documented evidence of police violence, human rights violations, a politician saying something inappropriate or illegal." - says Hany Farid, an expert in digital forensics and a professor at the University of California, Berkeley.

Online Scams

Now let's look closer at online scams. This is a huge industry, with its own markets, educational platforms, knowledge bases, and generative AI tools (WormGPT, FraudGPT).

Bad guys took an open-source LLM and trained it on all kinds of malicious data to make it generate deceptive content (fake shopping websites, phishing webpages and emails, call scripts, hacking guides), write malicious code, perform vulnerability mining, suggest attack vectors, and discover zero-day exploits.

Good guys, in turn, use behavioral analysis (as a method) and behavioral analytics (as a tool) in their money-related apps and services to prevent financial losses.

For example, financial mobile apps track incoming and outgoing calls: the event itself, its duration, and switching between apps during the call.

Why? Because the majority of malicious scenarios involve the victim talking to the bad guy during a session in the app.

If the tech behind the app detects a call, it also starts checking the user's navigation and typing patterns in the app, because there are detectable distinctions between those of "regular" users and users under attack.
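To make this concrete, here is a minimal sketch of how such a check could look. This is not any real bank's or vendor's implementation; every class name, signal, and threshold below is an illustrative assumption of mine.

    # Minimal sketch: flag a session as risky when an active phone call coincides
    # with typing/navigation behavior that deviates from the user's own baseline.
    # All signals and thresholds here are illustrative assumptions, not a real product.
    from dataclasses import dataclass

    @dataclass
    class SessionSignals:
        call_active: bool             # an incoming/outgoing call overlaps the app session
        call_duration_s: float        # how long the call has been going on
        app_switches: int             # times the user left the app during the call
        mean_keystroke_gap_ms: float  # typing cadence observed in this session
        screens_per_minute: float     # navigation speed inside the app

    @dataclass
    class UserBaseline:
        mean_keystroke_gap_ms: float  # the user's historical typing cadence
        screens_per_minute: float     # the user's historical navigation speed

    def session_risk(s: SessionSignals, b: UserBaseline) -> float:
        """Return a 0..1 risk score; above ~0.5 the app might step up verification."""
        if not s.call_active:
            return 0.0                # most coached-fraud scenarios involve a live call
        risk = 0.3                    # base risk just for the overlapping call
        # Victims being "walked through" a transfer tend to type slower and hesitate.
        if s.mean_keystroke_gap_ms > 1.8 * b.mean_keystroke_gap_ms:
            risk += 0.3
        # Navigation that is much slower than the user's own history is another marker.
        if s.screens_per_minute < 0.5 * b.screens_per_minute:
            risk += 0.2
        # Repeatedly switching to the dialer/messenger mid-flow during a long call.
        if s.app_switches >= 3 and s.call_duration_s > 120:
            risk += 0.2
        return min(risk, 1.0)

    # Example: a long call plus hesitant typing and slow navigation maxes out the score.
    signals = SessionSignals(True, 300, 4, 950, 2.0)
    baseline = UserBaseline(mean_keystroke_gap_ms=400, screens_per_minute=6.0)
    print(session_risk(signals, baseline))  # -> 1.0, time for step-up verification

In a real system these heuristics would be replaced by models trained on labeled sessions, but the principle is the same: the call itself is only a trigger, and the decision comes from comparing in-session behavior against the user's own baseline.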

Responsibility & Liability

And finally, the most important question: who should be held responsible for losses from online and AI-related fraud and scams?

  • Social networking platforms where all those scam ads are displayed

  • Developers of AI tools that enable creation of deepfake scams and malicious LLMs

  • Banks that failed to account for new types of scams and fraud in their policies and procedures

Social networks are doing their best to evade this responsibility. And I can understand that: advertisers (including malicious ones) are their customers, they bring money, and users are only a "commodity".

But there are two trends that could make social networks change their mind:

  • More users are treating their ads as scams by default (like, "If I see this ad on FB, it's a scam"). And that is bad news for legitimate advertisers

  • More rich influencers are suing social networks for doing nothing about deepfake-based scams

Developers of AI tools are acting in the same manner, and for the same reasons: it is the users of their services who bring money, not the scammed "general public".

But when it comes to banks, there is some good news. More and more countries are amending their legislation to shift scam liability to banks. In the UK, for example, banks are 100% liable for scam losses; US banks are following; and in the EU, banks become partially liable under PSD3.

And all this fuels Fintech and creates more jobs for us - high-tech professionals. :)

3 comments 👇
Максим Артемьев, 7 August at 22:40

Hi! Interesting to see that Sumsub (and its marketing) has finally made it to the Club :)

@mrartemev, I googled what Sumsub is. :) Nope, here (in Israel) we have plenty of our own players in this space. The idea of enriching transaction data with other data points, then checking it all for fraud markers and selling it as a service to e-commerce is not new and has already spawned more than one startup (and even a couple of unicorns).
