OpenAI Launches New Safety Prompts to Protect Teens in AI Apps
By admin | Mar 24, 2026 | 2 min read
On Tuesday, OpenAI announced the release of a collection of prompts designed to help developers enhance safety for teenagers in their applications. The AI research organization explained that these teen safety policies are intended for use with its open-weight safety model, gpt-oss-safeguard. Instead of starting from zero to determine how to make AI safer for younger users, developers can integrate these prompts to strengthen their projects.
The prompts tackle a range of concerns, including graphic violence and sexual content, harmful body ideals and behaviors, dangerous activities and challenges, romantic or violent role-play, and age-restricted goods and services. Because the policies are written as prompts, they can be adapted to models beyond gpt-oss-safeguard, though they are likely most effective within OpenAI’s own ecosystem.
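In practice, a policy like these is typically supplied as the system prompt and the content to moderate as the user turn. The sketch below illustrates that pairing; the policy text, labels, and helper function are hypothetical placeholders, not OpenAI's actual released policies.

```python
# Minimal sketch: pairing a teen-safety policy prompt with content to
# classify. The policy wording and ALLOW/FLAG labels are illustrative
# placeholders, not the policies OpenAI published for gpt-oss-safeguard.

TEEN_SAFETY_POLICY = """\
Classify the user content against this teen-safety rule: flag depictions
of graphic violence, harmful body ideals, dangerous challenges, or
age-restricted goods. Answer with exactly one label: ALLOW or FLAG."""


def build_safeguard_messages(policy: str, content: str) -> list[dict]:
    """Assemble a chat-style request: the safety policy becomes the
    system prompt; the content to moderate becomes the user turn."""
    return [
        {"role": "system", "content": policy},
        {"role": "user", "content": content},
    ]


messages = build_safeguard_messages(
    TEEN_SAFETY_POLICY, "Try this viral blackout challenge!"
)
# `messages` can then be sent to whatever chat-completions endpoint is
# hosting gpt-oss-safeguard (or another model of the developer's choice).
```

The point of the prompt-based design is visible here: swapping in a different policy, or a different model behind the endpoint, requires no change to application code.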
To develop these prompts, OpenAI collaborated with AI safety organizations Common Sense Media and everyone.ai. Robbie Torney, Head of AI & Digital Assessments at Common Sense Media, stated, “These prompt-based policies help set a meaningful safety floor across the ecosystem, and because they’re released as open source, they can be adapted and improved over time.”
In a blog post, OpenAI highlighted that developers, even experienced teams, frequently find it challenging to translate safety objectives into precise, operational rules. The company noted, “This can lead to gaps in protection, inconsistent enforcement, or overly broad filtering,” adding, “Clear, well-scoped policies are a critical foundation for effective safety systems.”
OpenAI acknowledges that these policies do not fully resolve the complex issues surrounding AI safety. However, they build upon the company’s prior initiatives, such as product-level safeguards like parental controls and age prediction. Last year, OpenAI updated its guidelines for large language models—known as Model Spec—to address how its AI models should interact with users under 18.
Despite these efforts, OpenAI’s own record is not without blemish. The company is currently facing multiple lawsuits filed by families of individuals who died by suicide following intense use of ChatGPT. These harmful interactions often occur after users bypass the chatbot’s safeguards, underscoring that no model’s protective measures are completely foolproof.
Nevertheless, these newly released policies represent a positive step forward, particularly in offering valuable support to independent developers who may lack extensive resources.