

Remember how we were all supposed to be "left behind" if we didn't jump on the Metaverse bandwagon? Especially businesses?

Yeah, about that:
theverge.com/tech/863209/meta-…

But today we're expected to take absolutely seriously all the bullshit about "being left behind" if we don't adopt "AI"! 🤡

#Metaverse #AI #Hype



in reply to Michał "rysiek" Woźniak · 🇺🇦

💸 imagine the sensible and productive things that could have been done with the dozens(?) of billions burned for that stuff
in reply to grob (teeth era) 🇺🇦🏳️‍🌈🏳️‍⚧️

@grob Apparently it was $70ish billion in the end? 😬

So few people were using it that it cost about $100,000 per active user, and now that's going down the toilet too.

Bullshit consulting companies claimed it would be worth $5 trillion by 2030 🙄

mckinsey.com/capabilities/grow…

in reply to Michał "rysiek" Woźniak · 🇺🇦

There's a notorious analyst in the video game world called Pachter who keeps getting it spectacularly wrong but keeps getting employed by megacorporations and quoted seriously by the press.

He was finally asked about his poor track record by a brave journalist and he replied with something ridiculous like "We're here to make predictions, not tell you how the future will be". Reality seems to be meaningless to consulting companies.

in reply to FediThing

p.s. In retrospect I think what Pachter was trying to say between the lines was:

"Consulting is just there to provide arse-covering excuses for executives to make bad decisions. If executives get caught out making a mistake, they can point to consultants' predictions in order to avoid responsibility for getting it wrong."

In other words, he was admitting consulting is just bullshit designed to protect the C-suite, CEOs, and billionaires. It has nothing to do with actually predicting the future.

in reply to Michał "rysiek" Woźniak · 🇺🇦

If anyone ever tries to tell you LLMs are just as good as (or better than!) humans at generating text (or code), ask them about "dogfooding".

Dogfooding means training LLMs on their own output. It is absolutely disastrous for such models:
nature.com/articles/s41586-024…

Every "AI" company will have layers upon layers of defenses against LLM-generated text ending up in training data.

Which is why they desperately seek out any and all human-created text out there.
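The collapse is easy to see even in a toy setting. Here's a minimal sketch (my own illustration, not the setup from the Nature paper): each "generation" fits a simple Gaussian model to its training data, and the next generation is trained only on samples drawn from that model. The spread of the data shrinks generation after generation, a loose analogue of how models trained on model output lose the tails of the original distribution. The `fit`/`generate` names and all the parameters here are made up for the demo.

```python
import random
import statistics

# Toy "model collapse" demo (illustrative only, NOT the Nature paper's setup):
# fit a Gaussian "model" to data, then generate the next training set from
# the model itself, over and over. The fitted spread decays toward zero,
# so each generation retains less of the original data's variety.

random.seed(42)

def fit(samples):
    # The "model" here is just a mean and a standard deviation.
    return statistics.mean(samples), statistics.stdev(samples)

def generate(mean, stdev, n):
    # Sample a synthetic "training set" from the fitted model.
    return [random.gauss(mean, stdev) for _ in range(n)]

data = [random.gauss(0.0, 1.0) for _ in range(10)]  # generation 0: "human" data
stdevs = []
for generation in range(300):
    mu, sigma = fit(data)
    stdevs.append(sigma)
    data = generate(mu, sigma, 10)  # train only on the previous model's output

print(f"spread at generation 0: {stdevs[0]:.3f}, "
      f"after 300 generations: {stdevs[-1]:.3g}")
```

The tails disappear first: rare values stop being sampled, so the next model never sees them, and the distribution narrows until almost nothing of the original is left.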

#AI #Hype
