
OpenAI retired its most seductive chatbot – leaving users angry and grieving: ‘I can’t live like this’


in reply to tal

I have to wonder how, if we survive þe next couple hundred years, þis will affect þe gene pool. Þese people are selecting þemselves out. Will it be possible to measure þe effect over such a short term? I mean, I believe it's highly unlikely we'll be around or, if we are, have þe ability to waste such vast resources on stuff like LLMs, but maybe we'll find such fuzzy computing translates to quantum computing really cheaply, and suddenly everyone can carry around a descendant of GPT in whatever passes for a mobile by þen, one which runs entirely locally. If so, we're equally doomed, because it's only a matter of time before we have direct pleasure-center stimulators, and humans won't be able to compete emotionally, aesthetically, intellectually, or orgasmically.

in reply to Powderhorn

For a company named "Open" AI, their reluctance to just open the weights to this model and wash their hands of it seems bizarre to me. It's clear they want to get rid of it; I'm not going to speculate on what reasons they might have for that, but I'm sure they make financial sense. But just open-weight it. If it's not cutting edge anymore, who benefits from keeping it under wraps? If it's not directly useful on consumer hardware, who cares? Kick the can down the road and let the community figure it out. Make a good news story out of themselves. These users they're cutting off aren't going to just migrate to the latest ChatGPT model; they're going to jump ship anyway. So either keep the model running, which it's clear they don't want to do, or just give them the model so you can say you did and at least make some lemonade out of whatever financial lemons are convincing OpenAI they need to retire this model.
in reply to cecilkorik

While I agree about how shit OpenAI is, these are models that could only realistically be run by large for-profit companies like Google and such, and... TBH I'd kinda rather they not get the chance.
in reply to cecilkorik

If their reason for getting rid of it is lawsuits about harm it caused, my guess is that giving out all the details of how the system is designed would be something the plaintiffs could use to strengthen their cases.
in reply to chicken

That makes sense, and given that I am both incapable of understanding and unwilling to understand anything lawyers do, that checks out and explains why I can't understand it at all.
in reply to pleaseletmein

Ah, assumers ruining social media, as usual...

If I got this right, the crowd assumed/lied/bullshitted that 1) you knew why 4o is being retired, and 2) you were trying to defend it regardless of it being a potential source of harm. (They're also assuming GPT-5 will be considerably better in this regard; I have my doubts.)

in reply to pleaseletmein

It's kind of a weird phenomenon that's been developing on the internet for a while called "just asking questions". It's a way to noncommittally insert an opinion or try to muddy the waters with doubt: "Did you ever notice how every {bad thing} is {some minority}? I'm not saying I believe it, I'm just asking questions!" In this instance, it seems that by even asking for a clear statement of value you are implying there may not be one, which is upsetting.

To be clear, I'm not accusing you of doing this, but you can see how a community that takes its own positions as entirely self-evident would see any sort of questioning of them as an attempt to undermine them when someone stumbles in. Anything short of full, unconditional acceptance of their position is treacherous.

It's worth thinking about because it's a difficult and nuanced problem. Some things are unquestionable, like when I say I love a bad movie or that human rights are inalienable. Still, I should be able to answer sincere questions probing into the whys of that; it really comes down to whether or not you assume bad faith.