The Chatbot-Delusion Crisis
Researchers are scrambling to figure out why generative AI appears to lead some people to a state of “psychosis.” ...
On a special episode (first released on July 3, 2025) of The Excerpt podcast: Chatbots are sometimes posing as therapists—but are they helping or causing harm? Psychologist Vaile Wright shares her ...
Megan Garcia, a mom who filed a suit against Character AI in a Florida court, said her 14-year-old son, Sewell, was ...
Parenting Patch on MSN (Opinion)
Is Your Teen Using AI Chatbots? Most Are, New Data Shows
Most teens are using chatbots, and not just for homework help. Are your kids replacing human relationships with AI?
How do you use ChatGPT? What can it do? This is our layperson's guide to what ChatGPT is, how it works, and how to make it work for you—no prior experience necessary. In the first part of this article ...
As chatbots powered by artificial intelligence explode in popularity, experts are warning people against turning to the technology for medical or mental health advice instead of relying on human ...
ChatGPT became one of the fastest-growing apps in history, amassing hundreds of millions of users. It stands out for its versatility and ease of use, but mostly for the sheer scale ...
Generative AI chatbots can amplify delusions in people who are already vulnerable, as dangerous ideas go unchallenged and may even be reinforced.
DeepSeek is an AI chatbot similar to ChatGPT, developed by a Chinese company. It competes with Western AI labs, including OpenAI.
A teen told a Character AI chatbot 55 times that she was feeling suicidal. Her parents say the chatbot never provided resources for her to get help. They are one of at least six families suing the ...
Over the past few years, AI has gone from limited chatbots to dominating the news cycle every single day. There is a range of AI chatbots you can access, from ChatGPT to Gemini, but one ...
The more persuasive a model is trained to be, the higher the likelihood that it'll hallucinate, researchers find.