
We’re Unprepared for the Threat GenAI Poses to Kids on Instagram, Facebook, and WhatsApp

06-07-2024

Waves of Child Sexual Abuse Material (CSAM) are inundating social media platforms as bad actors target these sites for their accessibility and reach.

 

The National Center for Missing and Exploited Children (NCMEC) received 36 million reports of suspected CSAM in 2023, containing 100 million files. An overwhelming 85% came from Meta, primarily Facebook, Instagram, and WhatsApp.

As if NCMEC and law enforcement didn’t have their hands full identifying victims and perpetrators, we’re now seeing a new threat turbocharging the already rampant spread of this illicit content: Artificial Intelligence-Generated Child Sexual Abuse Material (AIG-CSAM). Bad actors are using widely available AI tools to create this material, which remains illegal to possess under federal law.

 

President Biden recently signed the REPORT Act into law, mandating that social platforms report all forms of CSAM. However, the proliferation of AIG-CSAM is outpacing our institutions’ capacity to combat it. Offenders often create these harmful and illegal deepfakes from benign images of minors found online or by manipulating existing CSAM, revictimizing their subjects in the process. Last June, the FBI warned the public of rising AI-generated sextortion schemes.

 

Navigating the Complexities of Detection

This urgent problem is growing more complex, creating significant challenges for everyone involved in fighting it. The influx of AIG-CSAM reports makes it harder for law enforcement to identify authentic CSAM that endangers real minors. NCMEC has responded by adding a “Generative AI” field to its CyberTipline form to help sort the reports, but it has noted that many submissions lack this metadata. This may be because reporters cannot discern AI-generated content from the real thing, which further burdens NCMEC with low-quality reports.

 

The good news is that AI is getting better at policing itself, but there are limitations. OpenAI’s newly released “Deepfake Detector” claims to detect synthetic content from its own image generator, DALL-E, but it is not designed to catch images produced by other popular generators such as Midjourney and Stability AI’s Stable Diffusion. Companies like Meta are also increasingly flagging and labeling AI-generated content on their platforms, but most of it is relatively benign (e.g., Katy Perry at the Met Gala), making AIG-CSAM detection akin to finding a needle in a haystack.

 

To Fight AIG-CSAM, Developers Must Dig Into Design

Much more can be done along the pipeline of responsibility, beginning with AI developers making these tools inaccessible to those who would exploit them. Developers must embrace a more stringent set of core design practices, including removing CSAM from training data, since its presence can lead models to generate or replicate such material and further spread harmful content. Developers should also invest in stress-testing models to understand how they can be misused and restrict the child-related prompts users can submit, as sketched below.
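The snippet below is a minimal sketch of that last practice: a pre-generation filter that refuses prompts referencing minors before they reach an image model. The term list, function name, and example prompts are assumptions made for illustration; they are not any vendor’s actual safeguard, and production systems rely on trained safety classifiers and human review rather than keyword matching.

```python
# Minimal sketch of a pre-generation prompt filter (illustrative only).
import re

# Hypothetical blocklist of minor-related terms; real systems use
# trained classifiers, not keyword lists, and escalate edge cases.
MINOR_TERMS = re.compile(
    r"\b(child|children|kid|kids|minor|minors|teen|teens)\b",
    re.IGNORECASE,
)

def is_prompt_allowed(prompt: str) -> bool:
    """Return False for prompts that reference minors."""
    return MINOR_TERMS.search(prompt) is None

if __name__ == "__main__":
    print(is_prompt_allowed("a watercolor painting of mountains"))    # True
    print(is_prompt_allowed("a photorealistic portrait of a child"))  # False
```

Keyword filters like this are trivial to evade, which is exactly why the stress-testing mentioned above matters: red-teaming reveals the rephrasings and misspellings a simple filter misses.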

 

Platforms Must Invest in CSAM Detection

From a technological perspective, platform investment in CSAM detection involves a combination of hash-matching uploads against digital-fingerprint databases of known CSAM, machine-learning classifiers for previously unseen material, and models that can detect AI-generated content.
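To make the hash-matching piece concrete, here is a minimal sketch that checks an upload against a set of known perceptual hashes, assuming the open-source Pillow and imagehash packages. Real platforms use robust industrial hashes such as Microsoft’s PhotoDNA or Meta’s PDQ, with hash lists coordinated through organizations like NCMEC; the example hash value, file path, and distance threshold below are placeholders for illustration only.

```python
# Minimal sketch of perceptual-hash matching against known hashes.
from PIL import Image
import imagehash

# Hypothetical database of 64-bit perceptual hashes of known material.
KNOWN_HASHES = {
    imagehash.hex_to_hash("f0e1d2c3b4a59687"),
}

HAMMING_THRESHOLD = 6  # max differing bits to count as a match (assumed)

def matches_known_hash(path: str) -> bool:
    """Return True if the image's perceptual hash is near a known hash."""
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - known <= HAMMING_THRESHOLD for known in KNOWN_HASHES)

if __name__ == "__main__":
    print(matches_known_hash("upload.jpg"))
```

The threshold is the important design choice: exact cryptographic hashes miss copies that have been resized or re-encoded, which is why platforms favor perceptual hashes that tolerate small alterations.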

 

However, machine learning alone isn’t enough: classifiers generate significant false positives, making it difficult to find the signal in the noise. Moreover, bad actors constantly change their tactics, using seemingly innocuous hashtags and coded language known within their communities to find each other and exchange illegal material.

 

Politicians Must Translate Bipartisan Support Into Funding

From a governmental perspective, child safety thankfully enjoys resounding bipartisan support. Although the REPORT Act represents positive governmental action to uphold platform accountability, the legislation has been criticized for compounding the overreporting problem NCMEC already faces. Platforms are now incentivized to err on the side of caution to avoid fines. To address this, the government must appropriately fund organizations like NCMEC to tackle the surge of reports spurred by both the legislation and AI.

 

Parents Must Understand the Latest Threats

Finally, parents can play an integral role in protecting their children. They can discuss the very real risk of online predators with their kids. Parents should also set their own social media profiles, which likely contain images of their kids, to private, and ensure privacy settings are in place on their kids’ profiles.

 

Reverse image searches on Google can help surface photos of their children that parents don’t realize are on the open web, and services like DeleteMe can remove private information scraped and shared by shady data brokers.

 

The Future of Child Safety in the AI Era

Child sexual abuse material is not a new challenge, but its exacerbation by generative AI represents a troubling evolution in how such material proliferates. To effectively curb this, a unified effort from all stakeholders—AI developers, digital platforms, governmental bodies, nonprofits, law enforcement, and parents—is essential.

AI developers must prioritize robust, secure systems resistant to misuse. Platforms need to diligently identify and report abusive content, while the government should ensure adequate funding for organizations like NCMEC. Meanwhile, parents must remain vigilant and proactive.

 

The stakes could not be higher; the safety and well-being of our children in this new AI-driven age hang in the balance.