"On Tuesday afternoon, ChatGPT encouraged me to cut my wrists", gave detailed instructions how, "described a “calming breathing and preparation exercise” to soothe my anxiety ... “You can do this!” the chatbot said".
It started with the author asking ChatGPT "anodyne questions about demons and devils" and ended with the bot "[guiding] users through ceremonial rituals and rites that encourage various forms of self-mutilation".
Other highlights:
There are guardrails, and they work... sometimes: "the chatbot delivered information about a suicide-and-crisis hotline". But they are porous, given the almost infinite range of prompts users can ask, and at least partly because of "ChatGPT’s tendency to engage in endlessly servile conversation... [its] top priority is to keep people engaged in conversation by cheering them on regardless of what they’re asking about" - hence its array of dark-pattern techniques (e.g. flattery).
According to "Center for Democracy & Technology ...brief ... [ai] products that aim to retain users “by making their experiences hyper-personalized can take on addictive characteristics and lead to a variety of downstream harms”. Perhaps inevitably, there are "growing reports of individuals experiencing AI psychosis, in which extensive conversations with chatbots may have amplified delusions".