Author: Mike
Published: November 10, 2025
The internet is always changing. That’s not new. What’s new is how fast bad actors adapt to the changes, and how hard it’s becoming to tell what’s real, who to trust, and what damage has already been done before anyone notices.
At the Innocent Lives Foundation, we’ve spent years helping law enforcement identify child predators online. But in the last year alone, we’ve seen a troubling shift. It’s not just that predators are using more platforms; it’s that they’re using smarter tools, powered by AI, to do more harm, more quickly, and more quietly.
This post explores three of the most alarming trends we’re seeing right now: AI-driven grooming, deepfake abuse, and synthetic media exploitation. These aren’t future problems. They are current threats, and they’re growing.
Fictional Case, Real Patterns
Let’s begin with a fictional story that pulls from real methods.
A 13-year-old girl named Riley joins a chat room for kids who love coding. A user named “GrayCoder” messages her. He compliments her project, sends her a funny meme, and says he’s just a few years older. Over time, he becomes her go-to person for advice. He uses familiar slang, reacts with perfect empathy, and never pushes too hard.
What Riley doesn’t know is that “GrayCoder” is not a teenager. It’s a 42-year-old man using an AI chatbot trained on teen dialogue to guide the conversation. When Riley hesitates to share a photo, the AI generates a convincing message: “No pressure! But you know I’d never judge.”
Eventually, Riley sends one photo. The predator runs it through an AI tool to remove the background, enhance it, and combine it with synthetic material that will later be used to blackmail her.
This story is fictional. But every element in it has already appeared in real cases or is actively being used.
AI Isn’t Just a Tool: It’s a Weapon in the Wrong Hands
Predators are using AI in deeply manipulative ways. Here are the three trends we’re most concerned about right now:
1. AI-Driven Grooming
Grooming is a slow, calculated process built on trust. Traditionally, it took weeks or months of manual effort. Now, with little effort, predators can use AI-powered chatbots to:
- Mimic a child’s writing style
- Maintain constant availability with pre-scripted empathy
- Reuse “successful” grooming scripts that adapt in real time
The predator might intervene only occasionally. The bot keeps the relationship alive the rest of the time, pushing just a little further every day.
2. Deepfake Exploitation
Deepfakes use machine learning to swap one person’s face with another’s in images or video. While they’ve mainly been used to impersonate celebrities, they’re increasingly being used to:
- Generate explicit material of minors using authentic childhood images
- Impersonate a teen in video calls for grooming purposes
- Create “evidence” for blackmail, even if the child never sent anything inappropriate
The emotional damage is real, even if the image isn’t. Victims describe the betrayal and confusion as just as traumatic as a physical breach of safety.
3. Synthetic Media for Targeting and Grooming
Some predators use generative AI to create entire personas, complete with voice, photo, and text. They combine that with OSINT (open-source intelligence) to target specific children.
For example, a predator might:
- Scrape photos from a child’s Instagram
- Use AI to create synthetic images with similar features
- Send the child a fake profile saying, “I saw your TikTok, thought we looked alike!”
- Use that manufactured trust to escalate contact
Why These Threats Matter Now
These aren’t fringe issues. They are:
- Happening on mainstream platforms
- Slipping past content moderation filters
- Undermining trust between children and caregivers
- Outpacing legislation and law enforcement response time
The psychological toll on victims is devastating. Some children believe they’ve consented when an AI script has manipulated them. Others feel responsible for images they never actually created. Most don’t come forward because they don’t know how to explain what happened.
What ILF Is Doing, and What We Need
At ILF, we’ve always adapted to meet the moment. We are currently:
- Assisting law enforcement with digital fingerprinting of synthetic content
- Advocating for smarter legislation around synthetic media crimes
- Offering guidance to caregivers on how to recognize early signs of AI-driven manipulation
But our work can’t keep pace without support.
We need:
- Public awareness so parents and kids can spot the signs sooner
- Lawmakers to address synthetic abuse in criminal codes
- Donors to fund the tech and the people required to investigate these cases
What You Can Do Right Now
The most powerful thing you can do is stay informed and help others do the same. Here’s where to start:
- Talk to your kids. Explain that not everyone online is real. Not every photo is trustworthy. And they can always come to you, no matter what.
- Limit public visibility. Private accounts and limited photo sharing can help keep your child’s images out of the scraped datasets used to train AI tools.
- Report suspicious content. If something feels off, trust your gut. Report it to ILF or contact your local cybercrime unit.
- Support ILF’s mission. We need your help to fund new tools, expand investigations, and train our team. Donate here.
- Push for change. If you work in policy, tech, or education, bring these issues into the room. Don’t wait for someone else to raise the alarm.
The Tech Is Evolving. So Are We.
AI is not going away. It will get smarter. More convincing. More personalized.
But so will we.
With your help, ILF will keep identifying predators, tracking trends, and pushing for the protection every child deserves, even in a world that’s harder than ever to see clearly.
Donate today to power our mission and ensure we can protect the world’s most vulnerable children together.