in reply to along_the_road

LLMs are good for searching technical documentation. And that's it. They are barely usable outside this niche.

Stop using them for "humanitarian" purposes. They can't be a psychologist, a lawyer, or anything else similar.

in reply to Lembot_0006

They're good for pattern matching on smear tests, etc., and have a detection rate much better than humans (90%+ vs. ~75%).
in reply to along_the_road

I mean, if they're headed for model collapse anyway, might as well go full throttle.
in reply to sculd

Once Grok starts reading from ChatGPT, unless it already is, then they're 69ing!
in reply to along_the_road

The whole article is fixated on Grok being far-right and never seems to care that an LLM is citing another LLM instead of an actual source.
in reply to morrowind

Weeks rather than months or years!

Most SM users read like bots, so we're almost at critical mass for the dead internet.

in reply to Stepos Venzny

I disagree. The focus of the article is misinformation. It's literally the first sentence:

The latest model of ChatGPT has begun to cite Elon Musk’s Grokipedia as a source on a wide range of queries, including on Iranian conglomerates and Holocaust deniers, raising concerns about misinformation on the platform.


Yes, Grokipedia is right-wing. It was literally created to alter reality and spread lies that agree with their worldview! But the real problem is that it can't be edited with sourced, fact-based information; instead, AI generates everything. I think the article did explore the fact that it's one LLM depending on another...

in reply to Janx

If there was ever a difference between being far-right and being disinformation, there isn't one anymore.