

ChatGPT wrote “Goodnight Moon” suicide lullaby for man who later killed himself


in reply to Powderhorn

Yikes, this god damn timeline.

Needless to say, you're literally better off coming to the fediverse and talking to us than talking to an AI about thoughts of suicide. He had a therapist; he should have trusted them over some snake oil sold for the investment class. If you, yourself, need help, make sure to treat yourself well and find someone real to talk to instead of fake bots.

Bah, the fact that the AI helped push him toward suicide instead of away from it shows just how misanthropic this whole tech space is. Needless deaths, needless thefts, and an immeasurable pile of grief as we walk a circuit-guided path to a dark, inhumane future. RIP

in reply to MoogleMaestro

Don't forget to thank platforms like Bluesky that make people think discussing suicide is some kind of taboo / policy violation and that you'll get banned if you bring it up on social media.

Guy might not have known he could literally just discuss it in a place with reasonable enough rules, like the fediverse.

in reply to Powderhorn

The executives need prison time. That's the only thing that will get them to stop their bots from killing people.
in reply to Powderhorn

People seeking AI “help” is really troubling. They are already super vulnerable… it is not an easy task to establish rapport and build relationships of trust, especially as providers who will dig into these harmful issues. It’s hard stuff. The bot will do… whatever the user wants. There is no fiduciary duty to their well-being. There is no humanity, nor could there be.

There is also a shortage of practitioners, combined with insurance gatekeeping care if you are in the US. This is yet another barrier to legitimate care that I fear will continue to push people to use bots.

in reply to sparkles

God, one year at the school paper, the applicant for ad manager talked about her "wonderful repertoire with editorial." Some malaprops, you can handle. This was just like "how the fuck?"

in reply to jnod4

Hard disagree. This is overdone tripe, which is what AI is best at. Hell, it's definitionally overdone — need a large dataset to regurgitate this stuff, after all.

At any rate, this text got a man killed, so probably best not to praise it.

in reply to Powderhorn

"…which OpenAI designed to feel like a user’s closest confidant."


"AI safety," they cry, as they design some of the most preposterous and dangerously stupid things imaginable. I swear, Silicon Valley only uses creativity when they want to invent a new kind of Torment Nexus to use as a goal for Q4.

Making something like this should be a crime. LLMs are not a replacement for therapy and should never be treated like one.


in reply to Powderhorn

Thanks to Ars for including the lullaby. It is incredibly bleak.

Just to draw it out a little more.
A company intentionally made a product that is more than capable of killing its users. The company monitors the communications and decides not to intervene. (Beats me how closely communications are monitored, but the company can and does close accounts, as reported in other articles.)
These conversations went on for months or years. The company had more than enough time to act.
Sam Altman is a bad person for choosing not to.

in reply to MNByChoice

This opens up a slippery slope of requiring OpenAI to analyze user–LLM inputs and outputs, along with the question of privacy.

If anything, LLMs simply weren't ready for the open market.

E: a word

in reply to icelimit

Opens? OpenAI spent years doing exactly that. Though, apparently, they stopped almost three years ago.

maginative.com/article/openai-…

Previously, data submitted through the API before March 1, 2023 could have been incorporated into model training. This is no longer the case since OpenAI implemented stricter data privacy policies.

Inputs and outputs to OpenAI's API (directly via API call or via Playground) for model inference do not become part of the training data unless you explicitly opt in.

in reply to MNByChoice

If I'm reading this right, they (claim they) are not reading user inputs or the outputs to users, in which case they can't be held liable for the results.

If we want an incomplete and immature LLM to detect the subtle signs of depression and then take action to provide therapy that guides people away from harm, I feel we are asking too much.

At best it's like reading an interactive (and depressing) work of fiction.

Perhaps the only viable way is to train a depression detector and add a flag-and-deny function for users, which comes with its own set of problems.

in reply to icelimit

Edit: My initial reply was of poor quality. I skipped half of your thoughtful comment, AND I misunderstood your meaning as well. I apologize.

I think you are correct about your interpretation of their current policy. However, their old policy would have allowed for checking on users. The old policy is one reason my old company disallowed the use of OpenAI, as corporate secrets could easily be gathered by OpenAI. (The paranoid among us suspected that was the reason for releasing such a buggy AI.)

I agree. I think training a depression detector to flag problematic conversations for human review is a good idea.
