
Google, OpenAI, Roblox, and Discord have launched the Robust Open Online Safety Tools (ROOST) initiative, a new nonprofit aimed at improving child safety online. Announced at the AI Action Summit in Paris, ROOST will provide free, open-source AI tools to help companies detect, review, and report child sexual abuse material (CSAM). The initiative seeks to create a more accessible and transparent safety infrastructure in response to growing concerns over AI’s role in online harm.
Key Points:
- ROOST provides free, open-source AI safety tools to help companies combat child exploitation.
- $27 million in funding has been raised for the first four years of operation.
- Founding partners include Google, OpenAI, Roblox, Discord, and former Google CEO Eric Schmidt.
- AI-powered tools will unify existing CSAM detection systems, making safety measures easier to adopt.
ROOST is tackling an increasingly urgent issue as reports of child exploitation continue to rise. In 2023 alone, the National Center for Missing and Exploited Children (NCMEC) received more than 36.2 million reports of suspected online child exploitation, a 12% increase over the previous year. With AI rapidly transforming digital spaces, there is mounting pressure on tech companies to ensure their platforms remain safe, particularly for children.
The initiative is backed by major philanthropic organizations, including the Patrick J. McGovern Foundation, the Knight Foundation, and the AI Collaborative. ROOST will operate out of the Institute of Global Politics at Columbia University, bringing together experts in AI, cybersecurity, and child safety. Its approach includes leveraging LLMs to enhance safety measures, offering vetted AI training datasets, and streamlining existing content moderation technologies.
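ROOST has not said which models or pipelines its tools will rely on. As one concrete analogue for what LLM-assisted screening looks like today, OpenAI’s publicly documented moderation endpoint can classify user-generated text against abuse categories; the sketch below uses that existing API for illustration, not ROOST’s actual stack.

```python
# Illustration of LLM-assisted content screening using OpenAI's public
# moderation endpoint. This is an analogue for the kind of tooling ROOST
# describes, not ROOST's own (as-yet-unreleased) API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.moderations.create(
    model="omni-moderation-latest",
    input="user-generated text to screen",
)

result = response.results[0]
if result.flagged:
    # In practice, flagged items feed a human review queue; category scores
    # help moderators prioritize the most severe reports first.
    flagged = [name for name, hit in result.categories.model_dump().items() if hit]
    print("Flagged categories:", flagged)
```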
Eric Schmidt, a founding partner of ROOST, emphasized the need for a collaborative, open-source approach to online safety. “Starting with a platform focused on child protection, ROOST’s approach will make essential infrastructure more transparent, accessible, and inclusive,” Schmidt said in the announcement.
ROOST is also expected to provide AI moderation tools via APIs that platforms can integrate directly. While details remain scarce, it may incorporate Lantern, a cross-platform information-sharing project run by the Tech Coalition and backed by companies including Discord, which uses an AI-powered system to help companies flag and share information about harmful content. Additionally, Roblox is set to open-source its AI moderation system for detecting inappropriate speech in audio clips, which could be included in ROOST’s offerings.
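Lantern’s exact data schema and transport are not public in this report, but the core idea is that platforms exchange signals about harmful activity (hashes, URLs, behavioral indicators) rather than the content itself. A minimal sketch of that pattern, with a hypothetical signal format:

```python
# Hypothetical sketch of Lantern-style cross-platform signal sharing.
# The Signal structure and fields below are illustrative assumptions,
# not Lantern's actual schema.
from dataclasses import dataclass, asdict
import json

@dataclass
class Signal:
    signal_type: str    # e.g. "image_hash", "url", "keyword"
    value: str          # the hash or URL itself, never the underlying content
    source_platform: str
    confidence: float   # how confident the reporting platform is

def publish_signal(signal: Signal) -> str:
    """Serialize a signal for submission to a shared clearinghouse,
    where participating platforms can match it against their own content."""
    return json.dumps(asdict(signal))

# A platform that detects harmful activity shares the signal so other
# services can check whether the same actor or material appears on theirs.
example = Signal("url", "https://example.com/reported-page", "platform-a", 0.92)
print(publish_signal(example))
```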
The launch of ROOST comes amid a broader regulatory push for stronger online child safety measures, as governments scrutinize how platforms handle harmful content. The Kids Online Safety Act passed the U.S. Senate with bipartisan support in 2024 but has stalled in the House. In the absence of clear regulations, industry-led initiatives like ROOST are positioning themselves as proactive solutions.
Despite the backing of major companies, it remains unclear how ROOST’s tools will integrate with existing first-line CSAM detection systems, such as Microsoft’s PhotoDNA. However, the initiative’s promise of open-source safety infrastructure could help smaller companies implement robust child protection measures without the high costs typically associated with AI moderation.
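For context, first-line tools like PhotoDNA match perceptual hashes of uploaded images against databases of hashes of known abusive material. PhotoDNA’s algorithm itself is proprietary, so the sketch below shows only the generic pattern; the bit width, hash values, and distance threshold are illustrative assumptions.

```python
# Generic illustration of perceptual-hash matching, the pattern used by
# PhotoDNA-style systems. NOT PhotoDNA's actual (proprietary) algorithm;
# the threshold and sample hashes are placeholder assumptions.

def hamming_distance(a: int, b: int) -> int:
    """Count the bits on which two fixed-width hashes differ."""
    return bin(a ^ b).count("1")

def matches_known_hash(image_hash: int, known_hashes: set[int], threshold: int = 4) -> bool:
    """Flag an image whose perceptual hash lands within `threshold` bits of a
    hash of known material. Unlike cryptographic hashes, perceptual hashes are
    designed so minor edits (resizing, recompression) yield near matches."""
    return any(hamming_distance(image_hash, h) <= threshold for h in known_hashes)

# Example: the same image after slight recompression differs by two bits
# from the stored 64-bit hash, so it still matches.
known = {0x9C6E5A3D1F0B7284}
candidate = 0x9C6E5A3D1F0B7284 ^ 0b101
print(matches_known_hash(candidate, known))  # True
```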
With $27 million secured, ROOST’s first four years of operation are already funded. The challenge now is execution—ensuring that these tools are effective, widely adopted, and continually updated to keep pace with the evolving digital threat landscape.
Update: This article has been amended to clarify that Lantern is run by the Tech Coalition, not Discord.