
AI vs AI: Deepfake and GenAI Defense System for Combating the Synthetic Media Threat

Science & technology · Technical AI safety · AI governance · Long-Term Future Fund · Global catastrophic risks

Utsav Singhal

Not funded · Grant
$0 raised

Project summary

In an era where AI is reshaping our digital landscape at an unprecedented pace, we face a critical threat to the foundations of digital security and trust. The emergence of AI-generated content, particularly deepfakes and manipulated media, has opened a Pandora's box of potential misuse and deception. While venture capital floods into generative AI, the equally crucial field of defending against its misuse remains woefully underfunded and overlooked.

The urgency of addressing this challenge cannot be overstated. Recent incidents paint an alarming picture:

  1. Corporate Fraud: In 2019, a UK-based company fell victim to a sophisticated attack in which AI was used to mimic a CEO's voice, resulting in a fraudulent transfer of $243,000.

  2. Financial Scams: Deepfake videos of high-profile figures like Elon Musk have been used to promote fraudulent cryptocurrency schemes, leading to significant financial losses for investors worldwide.

  3. Political Manipulation: A deepfake video circulated showing a French diplomat making false, inflammatory statements, leading to international tensions that required substantial diplomatic efforts to resolve.

  4. Misinformation Campaigns: Former U.S. President Donald Trump reposted AI-generated deepfakes and manipulated images of Taylor Swift to Truth Social, depicting the pop star and "supporters" endorsing him in the upcoming election. This incident showcases how deepfakes can be weaponized for political gain and voter manipulation.

  5. Social Media Deception: We're witnessing a proliferation of deepfake-based influencer accounts on platforms like Instagram. For instance, an account with the handle @aayushiiisharma__ has amassed around 100,000 followers with hyper-realistic AI-generated content. Such accounts could potentially be used to scam followers or deceive brands hoping to advertise with real influencers.

  6. Personal Security Threats: Voice-based scams are on the rise, in which criminals use AI to emulate the voices of loved ones, often to request emergency financial aid. A recent case saw a woman lose Rs 1.4 lakh (approximately $1,700) to such a scam.

The scale of this threat is set to escalate dramatically. Gartner Research predicts that by 2025, a staggering 30% of deepfakes will be indistinguishable from authentic media without specialized detection tools. This forecast underscores the pressing need for advanced detection technologies to safeguard our digital interactions.

What are this project's goals? How will you achieve them?

We're a team of researchers and engineers from IIT Delhi and IIT Roorkee who are developing a state-of-the-art deepfake detection tool for mobile and web, aiming to identify 98-99% of modified media.

Key Features:

  1. Multi-Platform: Available on both mobile and desktop.

  2. High Accuracy: Targeting a 98-99% detection rate, significantly higher than current industry standards. Ambitious, but we believe it is feasible.

  3. Real-Time: Detection runs while you scroll your feed or during live calls.

  4. Continuous Learning: Utilization of machine learning to adapt to new deepfake techniques as they emerge.

  5. Privacy-Focused: Strict adherence to data protection regulations, ensuring user privacy and data security.

  6. That's me, an honorary veteran of the Air Force.

Well, not really!
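To make the "Continuous Learning" and "Real-Time" points above concrete, here is a minimal, illustrative sketch of one classic detection cue: GAN-generated images often carry unusual artifacts in the high-frequency band of their spectrum. This is not our production model; the function names and the 0.05 threshold are invented for illustration, and a real detector would combine many such signals with a learned classifier that is retrained as new generation techniques emerge.

```python
import numpy as np

def high_freq_energy_ratio(image: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of spectral energy above a radial frequency cutoff.

    GAN-generated images often show unusual high-frequency artifacts,
    so this ratio can serve as one (weak) cue for a detector.
    """
    # Collapse color channels to grayscale if needed
    gray = image.mean(axis=-1) if image.ndim == 3 else image
    # Power spectrum, shifted so the DC component sits at the center
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2
    h, w = gray.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Normalized radial distance from the center of the spectrum
    r = np.hypot((yy - h / 2) / h, (xx - w / 2) / w)
    total = spectrum.sum()
    return float(spectrum[r > cutoff].sum() / total) if total > 0 else 0.0

def score_frames(frames, threshold: float = 0.05):
    """Flag frames whose high-frequency energy exceeds a tuned threshold."""
    return [high_freq_energy_ratio(f) > threshold for f in frames]
```

A real-time pipeline would apply a check like this (alongside learned features) to sampled frames as the user scrolls, and the threshold/model would be updated continuously as new deepfake generators appear.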

We will offer one year of free access to the tool to every contributor who donates $10 or more on Manifund. Invites will be sent to the same email address used to make the contribution on Manifund.

How will this funding be used?

  • 50% - Research team stipends (we are currently paying these out of pocket)

  • 30% - Cloud compute costs (we currently have only minimal credits to work with)

  • 10% - Privacy certifications (US and EU)

  • 10% - Operational expenses (coworking space, travel)

Who is on your team? What's your track record on similar projects?

Utsav Singhal -> LinkedIn

Sarthak Gupta -> LinkedIn

4 research interns from IIT Delhi and IIT Roorkee

What are the most likely causes and outcomes if this project fails?

We don't expect any part of the project to count as an outright failure: every step is incremental, and if we cannot deliver the promised accuracy by the end of March 2025, we will open-source the tool.

What other funding are you or your project getting?

A $7,715 grant from Entrepreneur First (incubator). We have no other funding or commitments beyond this.

Feel free to ask any questions or request clarifications. We are based in HSR, Bangalore; feel free to visit us at Urban Vault 65, HSR, Bangalore, India. More info: https://satya-img.github.io/
