An excellent overview of the developments in the world of #LLMs over the last year, put together by @simon in his "Things we learned about LLMs in 2024": simonwillison.net/2024/Dec/31/…. Remember the YouTube paradox, where engineers made the site faster, but average load times globally went _up_ because suddenly more people could use it? I wonder if something similar could happen with LLMs and the environmental impact of prompts: individual prompts get cheaper, but overall energy consumption goes up.
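
A back-of-envelope sketch of that rebound effect (every number here is made up; only the shape of the argument matters):

```python
# Illustration of the rebound effect with entirely hypothetical numbers.
energy_per_prompt_before = 1.0       # arbitrary energy units per prompt
prompts_per_day_before = 1_000_000

energy_per_prompt_after = 0.1        # prompts become 10x more efficient...
prompts_per_day_after = 50_000_000   # ...but cheap prompts invite 50x more usage

total_before = energy_per_prompt_before * prompts_per_day_before
total_after = energy_per_prompt_after * prompts_per_day_after

print(total_before)  # 1000000.0
print(total_after)   # 5000000.0: total energy use is 5x higher despite the efficiency gain
```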
in reply to Thomas Steiner :chrome:

yeah that's interesting - as the cost of running a prompt drops to almost nothing (seriously, $1.68 for 68,000 image captions!?), people will inevitably find all sorts of new uses for the models
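
(Going by those numbers, that's $1.68 / 68,000 ≈ $0.000025 per caption, i.e. roughly a 400th of a cent each.)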
in reply to Simon Willison

I'm not worried at all about people finding use cases (quite the opposite, even), I'm more concerned about corporations finding "slop" use cases that people don't actually want.
in reply to Thomas Steiner :chrome:

I hear you on "There is so much space for helpful education content here" and "Knowledge is incredibly unevenly distributed". Have you thought about contributing to #Wikimedia pages like en.wikipedia.org/wiki/Large_la…, e.g. by reviewing them and leaving comments on the talk page?
in reply to Daniel Mietchen

@EvoMRI I've actually been ramping up my Wikipedia contributions recently; I should think some more about that