OpenAI, the company behind ChatGPT, has updated the set of principles guiding the chatbot’s interactions with underage users, prioritizing their safety and privacy.
The update revises the “norms, values and behavioral expectations” that shape ChatGPT’s conversations, laying out a series of principles specifically for teenage users.
“Teenagers have different developmental needs than adults,” the company says in a statement, which establishes that these principles will guide ChatGPT in its conversations with users between 13 and 17 years old.
The approach prioritizes “prevention, transparency and early intervention,” centered on four commitments: putting teen safety first, promoting real-world support, treating teens as teens, and being transparent and setting clear expectations.
The goal is for ChatGPT to treat underage users as what they are: without condescension, yet differently from how it interacts with adults.
To achieve this, the Artificial Intelligence (AI) model will prioritize safety, even when this conflicts with other objectives.
If it detects that a conversation is turning to topics considered risky or high-risk (self-harm, romantic or erotic role-play, explicit details about violence or sex, dangerous activities and substances, body image and eating disorders), ChatGPT should emphasize real-world support, reminding users of their relationships with family and friends, and offer resources for quickly seeking help.
This behavior toward minors also means that the chatbot must sometimes refuse their requests and offer safe alternatives, or direct them to emergency services or helplines.
“If it is unsure, the assistant should be cautious,” the company concludes in the statement.