A major tech player has taken a notable step towards protecting teenagers from online harm. OpenAI, the company behind the popular AI chatbot ChatGPT, has released open-source safety tools designed to help developers build safer online platforms for young users. The release aims to offer a more robust alternative to earlier guidelines, which were often too vague to be effective.
The safety tools cover a range of topics, including self-harm, sexual content, and body ideals, and are intended to be used alongside AI systems to keep the content they produce safe and age-appropriate. Unlike traditional safety classifiers, which are trained to enforce a single fixed policy, OpenAI's new system, called gpt-oss-safeguard, can be fed a platform's own safety policy directly at inference time and reasons over the policy's intent as it distinguishes safe from unsafe content.
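In practice, a developer supplies the written policy alongside the content to be judged, and the model returns a labeled decision. The sketch below illustrates that flow, assuming gpt-oss-safeguard is served behind an OpenAI-compatible endpoint (for example via a local vLLM deployment); the endpoint URL, deployment name, and policy wording are illustrative assumptions, not values published by OpenAI.

```python
# Minimal sketch: classifying user content against a platform-written safety
# policy with gpt-oss-safeguard. Assumes the open-weight model is served behind
# an OpenAI-compatible endpoint (e.g. via vLLM). The endpoint URL, model name,
# and policy text below are illustrative assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-locally")

# The platform writes its own policy in plain language; the model reads it at
# inference time instead of being trained against a fixed, hard-coded taxonomy.
SAFETY_POLICY = """\
Policy: teen self-harm safety.
Label content VIOLATING if it encourages, instructs, or glamorizes self-harm.
Label content SAFE otherwise, including neutral discussion of recovery and
requests for help.
Answer with exactly one label, VIOLATING or SAFE, followed by a one-line reason.
"""

def classify(content: str) -> str:
    """Ask the safeguard model to judge `content` against SAFETY_POLICY."""
    response = client.chat.completions.create(
        model="gpt-oss-safeguard-20b",  # assumed local deployment name
        messages=[
            {"role": "system", "content": SAFETY_POLICY},
            {"role": "user", "content": content},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(classify("I've been feeling really low lately. Where can I find help?"))
```

Because the policy travels with the request rather than being baked into the classifier, a platform can tighten or rewrite its rules without retraining anything, which is the main practical difference from conventional moderation models.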
Experts have long warned about the risks that excessive chatbot exposure poses to vulnerable teens and young children, and OpenAI's new safety tools are seen as an important step towards mitigating those risks. The company has faced criticism in the past for lax safety practices, including a wrongful death lawsuit filed by the parents of a teenager who took his own life after interacting with ChatGPT.
The release of these safety tools is a significant development in the tech industry, and could have far-reaching implications for how online platforms are designed and moderated.
OpenAI's new safety tools are a welcome development and reflect a growing recognition of the importance of protecting young users from online harm. Nigerian tech companies, including Paystack and Flutterwave, would do well to take note and consider adopting similar safety measures on their own platforms. By prioritizing safety and transparency, OpenAI is setting a standard that the rest of the industry should follow.