Taking the opportunity, at the end of June I went to a conference to hear what's new on one of my favorite topics: fraud, scams, and the achievements of human ingenuity in this field.
Afterwards I put together a presentation for my colleagues with the most notable and amusing moments. And I want to share it with you too. 😁
Deepfakes
Deepfakes are videos, audio, or images in which genuine content has been altered using artificial intelligence and deep learning algorithms. The techniques range from face-swapping and voice cloning to generating entirely artificial content that never existed in reality and forging live video meetings.
With modern technology, you don't need any special expertise to create a deepfake. It's also pretty cheap and really fast.
Bad things you can do with deepfakes:
Celebrity deepfakes exploit the celebrity's social influence and cultural authority to propagate misinformation and run scam advertising (crypto scams, investment scams, lottery scams).
Deepfakes of recognizable political leaders are used to manipulate opinions by spreading disinformation and defamation.
Deepfakes of technology leaders or famous journalists announcing a security breach, a technological breakthrough, or a product launch could be used to manipulate stock prices.
You may say:
I'm an ordinary person with an ordinary income, I don't have any political or cultural influence, my only audience is my cat, and I don't have any authority with her. Trying to deepfake me is a waste of resources.
Let's do a "thinking like a criminal" exercise:
How can criminals use deepfakes of ordinary people? These ideas came to my criminal mind:
- Pretending to be someone's boss and requesting confidential info or a funds transfer
- Taking over someone's messenger or social media account and targeting their contacts
- Targeting someone's contacts from a new account, pretending the old one is blocked or hijacked
- Deepfaking a video of some illegal or inappropriate activity (sex, drugs, cracking racist jokes) and using it for blackmail
I even have an idea for a startup with the following value proposition: "Our experts will use your deepfaked face to do a video interview or a video exam for you."
Google introduced me to other possible use cases:
- Revenge pornography (and nonconsensual pornography in general)
- Bullying
- Custody disputes
- Scams and fraud of all kinds (romance, technical support, "mama, I killed a man", fooling biometrics and liveness detection in security systems, etc.)
There is also a bunch of legal implications:
- People claim that deepfakes are real evidence
- People claim that real evidence is actually a deepfake
- Lawyers are asking for more and more proof, which makes it expensive and time-consuming for the parties to line up all the necessary experts and get an expert opinion on each "unit" of evidence
And this evidence problem suddenly has a huge impact on society as a whole:
"Suddenly there's no more reality. People no longer believe documented evidence of police violence, human rights violations, a politician saying something inappropriate or illegal," says Hany Farid, an expert in digital forensics and a professor at the University of California, Berkeley.
Online Scams
Now let's look closer at online scams. This is a huge industry, with its own markets, educational platforms, knowledge bases, and generative AI tools (WormGPT, FraudGPT).
Bad guys took an open-source LLM (WormGPT, for example, is reportedly based on EleutherAI's GPT-J) and trained it on all kinds of malicious data to make it generate deceptive content (fake shopping websites, phishing webpages and emails, call scripts, hacking guides), write malicious code, mine for vulnerabilities, suggest attack vectors, and hunt for zero-day exploits.
Good guys, in turn, use behavioral analysis (as a method) and behavioral analytics (as a tool) in their money-related apps and services to prevent financial losses.
For example, mobile banking apps track incoming and outgoing calls: the call event itself, its duration, and switching between apps during the call.
If you ask why: it's because the majority of malicious scenarios involve the victim talking to the bad guy during a session in the app.
If the tech behind the app detects a call, it also starts checking the user's navigation and typing patterns in the app, because there are detectable differences between those of "regular" users and users under attack.
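To make the idea more concrete, here is a minimal sketch of how such signals could be combined into a risk score. Everything below is hypothetical: the names, weights, and thresholds are mine, purely for illustration, and real anti-fraud systems use far richer ML models instead of hand-tuned rules.

```python
# Hypothetical sketch of in-session behavioral risk scoring.
# All signal names, weights, and thresholds are illustrative.
from dataclasses import dataclass
from statistics import mean

@dataclass
class TypingProfile:
    """Baseline typing cadence for one user, learned from past sessions."""
    mean_interval_ms: float  # average delay between keystrokes
    std_interval_ms: float   # how much that delay normally varies

def typing_deviation(profile: TypingProfile, intervals_ms: list[float]) -> float:
    """How many standard deviations the current cadence is from the baseline."""
    return abs(mean(intervals_ms) - profile.mean_interval_ms) / profile.std_interval_ms

def session_risk(call_in_progress: bool, app_switches: int, deviation: float) -> float:
    """Combine a few behavioral signals into a 0..1 risk score.

    The heuristic: scam victims are often on a live call with the attacker,
    flip between the banking app and the call, and type dictated data in
    a rhythm that doesn't match their usual one.
    """
    score = 0.0
    if call_in_progress:
        score += 0.4                          # a call during the session is the key signal
        score += 0.1 * min(app_switches, 3)   # flipping between apps mid-call
    if deviation > 2.0:                       # cadence is > 2 sigma off the baseline
        score += 0.3
    return min(score, 1.0)

# A user on an active call, switching apps, typing much slower than usual:
profile = TypingProfile(mean_interval_ms=180.0, std_interval_ms=40.0)
dev = typing_deviation(profile, [320.0, 310.0, 290.0, 335.0])
risk = session_risk(call_in_progress=True, app_switches=2, deviation=dev)
print(f"deviation = {dev:.1f} sigma, risk = {risk:.2f}")
# A high score wouldn't block the payment outright; it would more likely
# trigger step-up verification or a "are you being pressured?" warning screen.
```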
Responsibility & Liability
And finally, the most important question: who should be held responsible for losses from online and AI-related fraud and scams?
- Social networking platforms where all those scam ads are displayed
- Developers of AI tools that enable the creation of deepfake scams and malicious LLMs
- Banks whose policies and procedures failed to account for new types of scams and fraud
Social networks are doing their best to evade this responsibility. And I can understand that: advertisers (including malicious ones) are their customers, they bring money, and users are only a "commodity".
But there are two trends that could make social networks change their mind:
- More users are treating the ads there as scams by default ("if I see this ad on FB, it's a scam"), and that is bad news for legitimate advertisers
- More wealthy influencers are suing social networks for doing nothing about deepfake-based scams
Developers of AI tools are acting in the same manner, and for the same reasons: it is the users of their services who bring money, not the scammed "general public".
But when it comes to banks, there is some good news: more countries are amending their legislation to shift scam liability onto banks. In the UK, for example, banks are now required to reimburse victims of authorised push payment (APP) scams; US banks are starting to follow; and in the EU, banks become partially liable under the proposed PSD3 package.
And all this fuels fintech and creates more jobs for us high-tech professionals. :)
Hi! Interesting to see that Sumsub (and its marketing) has finally made it to the club, too :)