AI can greatly worsen the problem of online child sexual abuse; see study

The U.S. system for tracking and prosecuting people who sexually exploit children online is overloaded and broken, a new report finds, and artificial intelligence (AI) is about to make the problem much worse.

The Stanford Internet Observatory report examines in detail the CyberTipline, the clearinghouse authorized by the U.S. government to receive reports of online child sexual abuse material, known as CSAM. The center, run by the National Center for Missing and Exploited Children (NCMEC), receives tens of millions of reports annually from platforms such as Facebook, Snapchat and TikTok, and forwards them to the responsible agencies, sometimes leading to prosecutions that can dismantle pedophile and sex trafficking networks.

The father of a victim of online child abuse attends a news conference on Capitol Hill following a Jan. 31 hearing on protecting children online. Photograph: Haiyun Jiang/The Washington Post

However, only 5% to 8% of these reports result in arrests, according to the report, due to a lack of funding, legal restrictions and a series of deficiencies in the reporting, prioritization and investigation process. If these limitations are not resolved soon, the authors warn, the system could become unworkable as the latest AI image generators unleash a deluge of sexual images of virtual children that are increasingly "indistinguishable from real photos of children."

“These cracks will become chasms in a world in which AI is generating entirely new CSAM,” says Alex Stamos, a cybersecurity expert at Stanford University who co-authored the report. While computer-generated child pornography presents its own problems, he says the biggest risk is that “AI CSAM will bury real sexual abuse content,” diverting resources from real children in need of rescue.

The report adds to a growing outcry about the proliferation of CSAM, which can ruin children’s lives, and the likelihood that generative AI tools will exacerbate the problem. This comes as the US Congress is considering a suite of bills aimed at protecting children online, after senators questioned tech CEOs at a hearing in January.

Among them is the Kids Online Safety Act, which would impose sweeping new requirements on technology companies to mitigate a range of potential harms to young users. Some child safety advocates are also pushing for changes to Section 230, the United States' liability protections for online platforms. While their findings appear to add urgency to this legislative push, the authors of the Stanford report focused their recommendations on strengthening the current reporting system rather than cracking down on online platforms.

"There is a lot of investment that could be made just to improve the current system before doing anything that invades privacy," such as passing laws that force online platforms to look for CSAM or requiring "back doors" for law enforcement in encrypted messaging apps, Stamos said. Stamos, the former director of the Stanford Internet Observatory, previously served as head of security at Facebook and Yahoo.

How AI makes the problem worse

All of these problems are about to be made worse by an onslaught of AI-generated child sexual content. Last year, the nonprofit child safety group Thorn reported that it is seeing a proliferation of these images online amid a "predatory arms race" on pedophile forums.

While the technology sector has developed databases to detect known examples of CSAM, pedophiles can now use AI to generate new examples almost instantly. That may be partly because top AI image generators were trained on real CSAM, as the Stanford Internet Observatory reported in December.

When online platforms become aware of CSAM, they are required under federal law to report it to the CyberTipline so that it can be investigated and forwarded to the competent authorities. But the law does not require online platforms to look for CSAM in the first place. And constitutional protections against warrantless searches restrict the ability of the government or NCMEC to pressure technology companies to do so.

To address these issues, the report calls on Congress to increase the center’s budget, clarify how technology companies can handle and report CSAM without exposing themselves to liability, and clarify laws around AI-generated CSAM. It also calls on technology companies to invest more in carefully detecting and reporting CSAM, makes recommendations for NCMEC to improve its technology, and calls for law enforcement to train its officers to investigate reports of CSAM.

In theory, technology companies could help manage the influx of AI CSAM by working to identify and differentiate it in their reporting, says Riana Pfefferkorn, a researcher at the Stanford Internet Observatory who co-wrote the report. But under the current system, there is “no incentive for the platform to search.”

While the Stanford report does not endorse the Kids Online Safety Act, its recommendations include several provisions of the REPORT Act, which is more narrowly focused on the reporting of CSAM. The US Senate passed the REPORT Act in December, and it awaits action in the House.

In a statement last Monday, the National Center for Missing and Exploited Children said it appreciates Stanford's "thorough consideration of the inherent challenges" the center faces. The organization said it is eager to explore the report's recommendations.
