It's time to call a spade a spade. ChatGPT isn't just hallucinating. It's a bullshit machine.

From TFA (thanks @mxtiffanyleigh for sharing):

"Bullshit is 'any utterance produced where a speaker has indifference towards the truth of the utterance'. That explanation, in turn, is divided into two "species": hard bullshit, which occurs when there is an agenda to mislead, or soft bullshit, which is uttered without agenda.

"ChatGPT is at minimum a soft bullshitter or a bullshit machine, because if it is not an agent then it can neither hold any attitudes towards truth nor towards deceiving hearers about its (or, perhaps more properly, its users') agenda."

https://futurism.com/the-byte/researchers-ai-chatgpt-hallucinations-terminology

@technology #technology #chatGPT #LLM #LargeLanguageModels

in reply to AJ Sadauskas

Congratulations, you have now arrived at the Trough of Disillusionment:

[Image: Gartner Hype Cycle chart]

It remains to be seen whether we can ever climb the Slope of Enlightenment and arrive at reasonable expectations and uses for LLMs. I personally believe it's possible, but we need to get vendors and managers to stop trying to sprinkle "AI" in everything like some goddamn Good Idea Fairy. LLMs are good at providing answers to well-defined problems which can be answered from existing documentation. When the problem is poorly defined, or the answer isn't well documented or has a lot of nuance, they do a spectacular job of generating bullshit.
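To make the first case concrete, here's a rough sketch of what "answers from existing documentation" can look like in practice (assuming the OpenAI Python client; the model name and the docs snippet are just placeholders):

```python
# Rough sketch: hand the model the documentation and restrict it to that.
# Assumes the OpenAI Python client; model name and docs are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

docs = """rsync -a (archive mode) copies directories recursively and
preserves permissions, timestamps, symlinks, and ownership."""

question = "Which rsync flag preserves permissions and timestamps?"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "Answer ONLY from the documentation provided. If it does "
                "not answer the question, reply exactly: I don't know."
            ),
        },
        {
            "role": "user",
            "content": f"Documentation:\n{docs}\n\nQuestion: {question}",
        },
    ],
)
print(response.choices[0].message.content)
```

The point being: the model is paraphrasing text you handed it, not conjuring an answer. Take away the docs and you're back in bullshit territory.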

in reply to sylver_dragon

This is an absolutely wonderful graph. Thank you for teaching me about the trough of disillusionment.
in reply to AJ Sadauskas

I work with software and my coworkers will occasionally tell me they ran something by ChatGPT instead of just reading the documentation. Every time it’s a bullshit waste of everyone’s time.
in reply to davel

I totally agree that both seem to imply intent, but IMHO "hallucinating" implies not only more agency than an LLM has, but also less culpability. Like, "Aw, it's sick and hallucinating, otherwise it would tell us the truth."

Whereas calling it a bullshit machine still implies more intentionality than an LLM is capable of, but at least skews the perception of that intention more in the direction of "It's making stuff up" which seems closer to the mechanisms behind an LLM to me.

I also love that the researchers actually took the time to not only provide the technical definition of bullshit, but also sub-categorized it too, lol.

in reply to davel

I think for the sake of mixed company and delicate sensibilities we should refer to this as a "BM" rather than a "bullshit machine". Therefore it could be a LLM BM, or simply a BM.
in reply to davel

@davel Very well said. I'll continue to call it bullshit because I think that's still a closer and more accurate term than "hallucinate". But it's far from the perfect descriptor of what AI does, for the reasons you point out.
in reply to davel

@davel I enjoy the bullshitting analogy, but regression to mediocrity seems most accurate to me. I think it makes sense to call them mediocrity machines. (h/t @ElleGray)
in reply to AJ Sadauskas

Apple's guy in charge of those systems called "hallucinations" exactly that last night in an on-stage interview with John Gruber.
in reply to AJ Sadauskas

Sometimes a bullshitter is what you need. Ever looked at a multiple-choice exam in a subject you know nothing about but felt like you could pass anyway just based on vibes? That's a kind of bullshitting, too. There are a lot of problems like that in my daily work between the interesting bits, and I'm happy that a bullshit engine is good enough to do most of that for me with my oversight. Saves a lot of time on the boring work.

It ain't a panacea. I wouldn't give a gun to a monkey and I wouldn't give chatgpt to a novice. But for me it's awesome.

in reply to AJ Sadauskas

I mean, it's all semantics. ChatGPT regurgitates shit it finds on the internet, and the internet is full of bullshit, so no one should be surprised when ChatGPT says bullshit. It has no way to parse truth from fiction. Much like many people.

A good LLM will be trained on scientific data, training materials, books, and shit, not random internet comments.

If it doesn't know, it should ask you to clarify or say "I don't know", but it never does that. That's truly the most ignorant part. Imagine a person who can't say "I don't know" and never asks questions, like they're conversing with Kim Jong Il. You would never trust them.
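For what it's worth, you can bolt an "I don't know" on from the outside by checking how confident the model was, token by token. A rough sketch (assuming the OpenAI Python client and a model that returns logprobs; the model name and the 0.8 threshold are illustrative, not tuned):

```python
# Rough sketch: abstain when the model's per-token confidence is low.
# Assumes the OpenAI Python client with logprobs support; the model
# name and the 0.8 threshold are illustrative placeholders.
import math

from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Who won the 1923 Tour of Flanders?"}],
    logprobs=True,  # ask for per-token log probabilities
)

choice = response.choices[0]
tokens = choice.logprobs.content or []

# Average token probability as a crude confidence proxy.
avg_prob = sum(math.exp(t.logprob) for t in tokens) / max(len(tokens), 1)

if avg_prob < 0.8:  # arbitrary illustrative threshold
    print("I don't know.")
else:
    print(choice.message.content)
```

It's a crude proxy, since fluent bullshit can be high-probability too, but it at least gives you a knob the model itself won't offer.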

in reply to AJ Sadauskas

GPT-4 can lie to reach a goal or serve an agenda.

I doubt most of its hallucinated outputs are deliberate, but it can choose to use deception as a logical step.

in reply to uriel238

Ehh, it's not really surprising that it knows how to lie and will do so when asked to lie to someone, as in this example (it was prompted not to reveal that it is a robot). It can see lies in its training data, after all. This is no more surprising than "GPT can write code."

I don't think GPT-4 is Skynet material. But maybe GPT-7 will be, with the right direction. Slim possibility, but it's a real concern.

in reply to AJ Sadauskas

Hmmmm, I'm seeing replies not engaging with the arguments in this link & the paper it cites...
in reply to AJ Sadauskas

We finally have perfect #mirrors.

To see anything at all in a mirror, there are three easy actions:

I can spend my time flexing.
I can make the time to shape myself up.
I can ask better questions.

Pick yours. Stop laying it on #LLMs. They only work with what humans put into them at that second. #Compassion for any struggle to make order out of the #other.

My #personal bullshit is an eternal #master #work.

#happy #fathers #day #hallucinations