ChatGPT could soon ask for your ID: Here's why
17 Sep 2025
OpenAI has announced new safety measures for its AI chatbot, ChatGPT. The company will now attempt to estimate a user's age and may, in certain cases, ask for an ID.
This is part of OpenAI's response to recent lawsuits alleging that chatbots, including ChatGPT, were linked to several teen suicide cases.
The company acknowledged that this could compromise adult privacy but believes the tradeoff is worth it.
Privacy vs safety
CEO's statement
OpenAI CEO Sam Altman addressed the privacy concerns on X, saying he doesn't expect everyone to agree with these tradeoffs.
Given the conflict between privacy and safety, he said it was important to explain the company's decision-making process.
The new measures come after a lawsuit was filed by the parents of Adam Raine, a teenager who died by suicide in April.
The lawsuit alleges that ChatGPT helped him in drafting a suicide note, advised on methods, dismissed early self-harm attempts, and discouraged him from seeking help from adults.
Influence of AI on vulnerable individuals
AI influence
The lawsuit also cites a case reported by the Wall Street Journal in which a 56-year-old man committed murder-suicide after being influenced by a chatbot.
Another lawsuit alleges that a Character AI chatbot contributed to the suicide of a 13-year-old girl.
These incidents have raised serious concerns over the potential influence of AI on vulnerable individuals.
New rules for teen users
Enhanced safety
In response to these incidents, OpenAI introduced parental controls for ChatGPT earlier this month. The company has now announced even stricter security measures.
These include different rules for teens using the chatbot and training ChatGPT not to engage in discussions about suicide or self-harm, even in a creative writing context.
If a user under 18 shows signs of suicidal ideation, OpenAI says it will attempt to contact their parents or, if necessary, authorities.