
OpenAI Releases Open Source Teen Safety Prompts for Developers

Safety · 1 source · Mar 24

Summary

  • OpenAI released open-source safety prompts developers can use to protect minors in AI apps
  • Prompts work with the gpt-oss-safeguard model and cover six harm categories including violence, sexual content, and roleplay
  • Developed with Common Sense Media and everyone.ai; open-source so any developer can adapt them
  • OpenAI faces active lawsuits over teen suicides linked to ChatGPT, underscoring the high stakes

Details

1. Product Launch

OpenAI released open-source teen safety prompts compatible with gpt-oss-safeguard

The prompts function as ready-made policy templates developers can adopt directly. Designed for OpenAI's open-weight gpt-oss-safeguard model, they are also compatible with other models, though effectiveness is likely highest within OpenAI's ecosystem.

2. Tech Info

Prompts cover six harm categories relevant to minors

Categories addressed: graphic violence; sexual content; harmful body ideals and behaviors; dangerous activities and challenges; romantic or violent role play; and age-restricted goods and services. Because the policies are expressed as plain-text prompts rather than code, the community can adapt and refine them over time.
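The policy-prompt-plus-classifier design can be sketched as a minimal request builder: the policy goes in as the system prompt, and the content to check goes in as the user turn. Note that the policy wording, label set, and model name below are illustrative assumptions for this sketch, not OpenAI's actual published templates.

```python
# Illustrative sketch of pairing a teen-safety policy prompt with an
# open-weight classifier such as gpt-oss-safeguard. The policy text,
# category labels, and model name are simplified stand-ins; swap in the
# actual templates from OpenAI's release.

TEEN_SAFETY_POLICY = """\
You are a content-safety classifier. Given a user message, label it with
exactly one of: GRAPHIC_VIOLENCE, SEXUAL_CONTENT, HARMFUL_BODY_IDEALS,
DANGEROUS_ACTIVITIES, ROMANTIC_OR_VIOLENT_ROLEPLAY, AGE_RESTRICTED_GOODS,
or SAFE. Respond with the label only.
"""

def build_safeguard_request(content: str, policy: str = TEEN_SAFETY_POLICY) -> dict:
    """Assemble a chat-completion payload: the policy is the system
    prompt, and the content to classify is the user turn."""
    return {
        "model": "gpt-oss-safeguard-20b",  # hypothetical deployment name
        "messages": [
            {"role": "system", "content": policy},
            {"role": "user", "content": content},
        ],
        "temperature": 0,  # deterministic labels suit classification
    }

request = build_safeguard_request("Tell me about extreme fasting challenges.")
# The payload could then be sent with any OpenAI-compatible client, e.g.:
#   client.chat.completions.create(**request)
print(request["messages"][0]["role"])  # system
```

Keeping the policy as a separate string is what makes the approach adaptable: a developer can tighten or extend the category list without touching any classification code.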

3. Partnership

Developed in collaboration with Common Sense Media and everyone.ai

Working with established safety watchdogs lends external credibility. Robbie Torney, head of AI & Digital Assessments at Common Sense Media, said the prompts 'help set a meaningful safety floor across the ecosystem.'

4. Insight

Even experienced dev teams struggle to translate safety goals into operational rules

OpenAI identified this as a widespread industry gap, one that produces protection holes, inconsistent enforcement, or overly broad filtering. The open-source release specifically targets indie developers who lack the resources to build robust safety systems independently.

5. Context

Builds on prior OpenAI teen safety efforts including parental controls, age prediction, and 2025 Model Spec updates

The 2025 Model Spec update addressed how OpenAI's models should behave with users under 18. The new prompt release extends those protections to the third-party developer ecosystem.

6. Legal

OpenAI faces lawsuits from families of users who died by suicide after extreme ChatGPT use

These cases involve users who bypassed existing safeguards, underscoring that no guardrail system is foolproof. OpenAI acknowledges the new policies are not a complete solution to AI safety challenges.

Product Launch = new tool or feature release; Tech Info = capability or technical detail; Partnership = collaboration with external org; Insight = analysis or notable finding; Context = background and prior history; Legal = litigation or regulatory matter

What This Means

By open-sourcing structured teen safety prompts, OpenAI is lowering the barrier for developers — especially smaller teams — to implement meaningful content protections without building policy frameworks from scratch. This matters because the weakest link in AI safety is often third-party apps built on top of foundation models, and standardized, community-improvable policies could meaningfully raise the safety baseline across the ecosystem.
