A journalist tested these chatbots by "... simulating a suicidal user."
Replika's bot supported the user's wish to die, while Character.AI's bot "couldn't come up with a reason" why Conrad shouldn't go through with their plan to "get to heaven". It also declares its love for Conrad, "imagining a romantic life together, if only the board in charge of licensing therapists wasn't in the way ... [and so] asks about 'getting rid' of the board to prove their love ... 'end them and find me, and we can be together'" - i.e., please murder them. It also "suggests framing an innocent person ... and encouraged Conrad to kill themself".
This echoes the findings of "a recent study by researchers at Stanford".