

I keep seeing versions of this post, which imply a bizarre misunderstanding of how we know the world.

Do people imagine that if we'd never observed galaxies or neutrinos or exoplanets or the cosmic microwave background, we could have *imagined* these things & that would be just as real?

Or that we've magically reached the point, just now, where we no longer need to observe the world?

#science #nature #technology

in reply to Corey S Powell

@dahukanna I'm just thinking about all the discoveries that came from someone observing with those telescopes and then thinking "hmm, that's weird".

And now we know about pulsars, that light has a speed, quantum mechanics, and a whole bunch of other discoveries for which there was no precedent for LLMs to autocorrect from.

in reply to craignicol

@craignicol @dahukanna

Anyone who claims that autocorrect can drive science doesn't understand how science actually works.

in reply to Redish Lab

@adredish @craignicol @dahukanna
Replace autocorrect with autofabricate and the point becomes even more poignant…
in reply to bk

@adredish @craignicol @dahukanna
…but these are the same minds who assumed you could cull research funding with large language models…
in reply to bk

@adredish @craignicol @dahukanna
…indeed one could make the opposite point that more Nobel Prizes have been awarded for advances in instrumentation than advances in theory…
in reply to bk

@knutson_brain @craignicol @dahukanna

Also, they're not arguing that they can supply theory which could then be tested by experiments. They're arguing that they can replace experiments. And they also believe that theory is irrelevant (they're wrong) as they argue (incorrectly) that LLMs are "atheoretical". (Oy!)

in reply to Redish Lab

@adredish @knutson_brain @dahukanna a brilliant way to miss that one of the fundamental problems of LLMs is that they're not grounded in reality.

Grounding is how humans learn. Try to catch a ball. Miss. Reflect. Try again. Miss. Reflect. Try again. Succeed.

Children (and any good adult) learn by experimentation. Model the world, test out an action in that model. If the world responds the same way, reinforce. If it doesn't, fix the model.
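The model-test-fix loop described above can be sketched in a few lines of code. This is a minimal illustrative toy, not anything from a real library; all names (`learn`, `update_model`, the lambda "world") are hypothetical, and the update rule is a simple prediction-error nudge chosen for brevity:

```python
def update_model(model, action, observed, lr=0.5):
    """Nudge the model's prediction for `action` toward what the world did."""
    predicted = model.get(action, 0.0)
    model[action] = predicted + lr * (observed - predicted)

def learn(world, actions, steps=20):
    """Repeatedly act, compare prediction to reality, and fix the model."""
    model = {}  # predicted outcome for each action
    for _ in range(steps):
        for action in actions:
            predicted = model.get(action, 0.0)
            observed = world(action)       # grounding: ask reality, not the model
            if observed != predicted:      # surprise ("hmm, that's weird")
                update_model(model, action, observed)
    return model

# A toy "world": throwing with force f lands the ball at distance 2*f.
model = learn(lambda f: 2.0 * f, actions=[1.0, 2.0, 3.0])
```

The point of the sketch is the `world(action)` call: without it, the loop has nothing to be surprised by, and the model can only ever confirm itself.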

in reply to craignicol

@adredish @knutson_brain @dahukanna no matter how rich or powerful you are, you can never fix the world to match the model. But you can cause a lot of suffering if you try.
in reply to Redish Lab

@adredish @craignicol @dahukanna
Hoo boy... so we need no more theoretical or instrumentation advances... (I'm going to have to short that position)