Abstract

The advent of the Internet fundamentally changed the landscape of child sexual exploitation and abuse (CSEA). In the last two decades, annual reports of suspected child sexual abuse material (CSAM) have increased by over 13,400%, now numbering in the tens of millions. More than 1 in 10 teenagers have experienced sexual extortion in their lifetime, and 40% of children report being groomed by someone they know only online. Generative AI (GAI) is poised to be another harmful step change in this landscape. In the past year, at least 1.2 million children globally have disclosed that their images were manipulated into sexually explicit deepfakes. 2 in 5 Australian adolescents who experienced sexual extortion report being threatened with digitally manipulated material, and 50% of law enforcement agencies globally have encountered AI-generated CSAM used to groom minors.

While AI is misused to facilitate CSEA, it is also being used to combat this crime. Machine learning and artificial intelligence (ML/AI) solutions are increasingly used by online platforms, hotlines and law enforcement to accelerate content review, victim identification, and prioritization and triage. Yet building these ML/AI systems comes with its own challenges, including mitigating model bias, limiting reviewer exposure to harmful content, and keeping pace with a continuously changing online environment.

This talk will present the last decade of the speaker's research, technology development, and policy advocacy at the intersection of ML/AI and child safety. The speaker will first outline efforts to build ML/AI systems that classify and categorize CSAM and other exploitative content, covering critical ethical, legal and issue-specific challenges. Second, the speaker will discuss their work developing technical and policy solutions to prevent and combat GAI-facilitated CSEA, including interdisciplinary research conducted with psychologists, trust and safety personnel, ML/AI safety researchers, and others. They will further present their efforts to proactively galvanize the ecosystem around these actionable interventions, covering voluntary commitments, standard setting and policy advocacy. Finally, the speaker will outline their future research agenda: re-designing current and future AI systems and technologies to prevent their misuse in facilitating CSEA, across the platforms and physical environments where they are deployed.

Hybrid event posted in Seminar Series: Security for Faculty and Staff