Yesterday Cory Doctorow argued that refusal to use LLMs was mere "neoliberal purity culture". I think his argument is a strawman, doesn't align with his own actions and delegitimizes important political actions we need to take in order to build a better cyberphysical world.

tante.cc/2026/02/20/acting-eth…


in reply to tante

Dunno where you got the idea that I have a "libertarian" background. I was raised by Trotskyists, am a member of the DSA, am advising and have endorsed Avi Lewis, and joined the UK Greens to back Polanski.
in reply to Cory Doctorow

@pluralistic My impression was, Tante meant this specific argument and the way it is structured, and the way it functions. I hold both of you in high esteem, and I don't have the impression that he'd somehow characterize anything beyond that argument he discusses.
in reply to R.L. LE

@herrLorenz
> Cory shows his libertarian leanings here...

> Many people criticizing LLMs come from a somewhat leftist (in contrast to Cory’s libertarian) background.

in reply to Cory Doctorow

@herrLorenz
This falls into the "you are entitled to your own opinions, but not your own facts" territory.
in reply to Cory Doctorow

@pluralistic I just spoke about my impression, but didn't lay claim to objective truth. I'll keep reading along. ✌️
in reply to Cory Doctorow

@pluralistic @herrLorenz that second example goes well into overreach territory, and I can see why you'd not be happy with it.

And/but a big part of libertarian appeal is that it muddies how being "individually free from regulation" can be cast as liberatory, as if individual freedom is all that's needed. "I'm free when there are no regulations" is obviously shallow to lefties, but individual freedom is also a component of why people are lefties; there's real overlap.

in reply to CJPaloma might be hiking

@CJPaloma @herrLorenz
There is no virtue in being constrained or regulated per se.

Regulation isn't a good unto itself.

Regulation that is itself good - drawn up for a good purpose, designed to be administrable, and then competently administered - is good.

in reply to Cory Doctorow

@pluralistic @herrLorenz Of course! Agreed.

The overlap ends around -when- reasons are "good" enough. Laws about how to treat other people are relatively easy.

But until enough people see rivers on fire, regulations on -doing certain things- aren't imposed, despite many people saying "hey, this isn't good" decades prior.

Not reining in/regulating until after -foreseeable- catastrophes results in all kinds of shit shows (from the MIC, to urban sprawl, to plastics, to tax laws, etc)

in reply to Cory Doctorow

Fair enough, but that's not the core of the argument
@tante made. He had the same complaint for starters (your argument was heavily drenched in 'you ppl are purists'), but he also makes the valid argument that technology isn't neutral in itself. Open weights based on intellectual theft and forced labor are still a problem. Until we have a discussion on how the weights come to fruition, LLMs are objectively problematic from an ethical view. That has nothing to do with purism.
in reply to tante

That doesn't seem to be the best idea @pluralistic

AI and LLM output is 90% bullshit, and most people don't have the time or the patience to work out which 10% might actually be useful.

That's completely ignoring the environmental and human impacts of the AI bubble.

Try buying DDR memory, a GPU or an SSD / HDD at the moment.

in reply to Simon Zerafa (Status: 😊)

@simonzerafa
What is the incremental environmental damage created by running an existing LLM locally on your own laptop?

As to "90% bullshit" - as I wrote, the false positive rate for punctuation errors and typos from Ollama/Llama2 is about 50%, which is substantially better than, say, Google Docs' grammar checker.

in reply to Cory Doctorow

@pluralistic

I am astonished that I have to explain this,

but very simply in words even a small child could understand:

using these products *creates further demand*

- surely you know this?

Well, either you know this and are being facetious, or you are a lot stupider than I ever thought possible for someone with your privilege and resources.

I am absolutely floored at this reveal, just wow, "where's Cory and what have you done with him?" 🤷

Massive loss of respect!

@simonzerafa @tante

in reply to Cory Doctorow

@pluralistic
Of course, I am speaking in generalities.

Encouraging the use of LLMs is counterproductive in so many ways, as I highlighted.

Pop a power meter on that LLM-adorned PC and let us all know what the power usage looks like with and without your chosen LLM running on a typical task 🙂

That's power that has to be generated somewhere, even if it's with renewable energy.

The main issue with LLMs is that they don't encourage critical thinking, in a world which is already suffering from a massive shortage.

in reply to Simon Zerafa (Status: 😊)

@simonzerafa
As I wrote (and it seems you haven't read what I wrote, which is weird, because that seems like a good first step if you're going to criticize my conduct), I'm running Ollama on a laptop that doesn't even have a GPU.

Its power consumption is comparable to, say, watching a Youtube video.

I know this because my laptop is running free software that lets me accurately monitor its activity, and because the model is also free software.

in reply to Cory Doctorow

@simonzerafa
Checking for punctuation errors does not discourage critical thinking. It's weird to laud "critical thinking" and also make this claim.
in reply to Cory Doctorow

@pluralistic @simonzerafa on this one for example I fully agree with Cory. This is not him having a genAI system write or anything like that.
in reply to tante

@pluralistic @simonzerafa I agree in principle with Cory, but I really wish that he had clarified that:

1. Ollama is not an LLM, it's a server for various models, of varying degrees of openness.
2. Open weights is not open source, the model is still a black box. We should support projects like OLMO, which are completely open, down to the training data set and checkpoints.
3. It's quite difficult to "seize that technology" without using Someone Else's Computer to do so (a.k.a. clown/cloud)

in reply to David Huggins-Daines

@pluralistic @simonzerafa But ALSO: using a multi-billion-parameter synthetic text extruding machine to find spelling and syntax errors is a blatant example of "doing everything the least efficient way possible" and that's why we are living on an overheating planet buried under toxic e-waste.

If I think about it harder I could probably come up with a more clever metaphor than killing a mosquito with a flamethrower, but you get the idea.

in reply to David Huggins-Daines

@dhd6 @simonzerafa

No. It's like killing a mosquito with a bug zapper whose history includes thousands of years of metallurgy, hundreds of years of electrical engineering, and decades of plastics manufacture.

There is literally no contemporary manufactured good that doesn't sit atop a vast mountain of extraneous (to that purpose) labor, energy expenditure and capital.

in reply to Cory Doctorow

@pluralistic @simonzerafa As always, yes and no. A bug zapper is designed to zap bugs, it is a simple mechanism that does that one thing, and does it well. An LLM is designed to read text and generate more text.

That we have decided that the best way to do NLP is to use massively overparameterized word predictors that we have trained using RL to respond to prompts, rather than just, like, doing NLP, is just crazy from an engineering standpoint.

Rube Goldberg is spinning in his grave!

in reply to David Huggins-Daines

@dhd6 @simonzerafa

Remember when Usenet's backbone cabal worried about someone in Congress discovering that the giant, packet-switched research network that had been constructed at enormous public expense was being used for idle chit chat?

The nature of general purpose technologies is that they will be used for lots of purposes.

in reply to Cory Doctorow

@pluralistic @simonzerafa indeed, I guess the question is whether the scale of the *ahem* waste, fraud and abuse *ahem* of resources that LLMs seem to imply, even in benign use cases like yours, is out of line with historical precedent or not.

Am I an old man yelling at a cloud?

No, it's the children who are wrong!

in reply to Cory Doctorow

@pluralistic @dhd6 @simonzerafa what a shit take dude. rockets being perfected by nazis, project paperclip, and now a neonazi in charge of one of the largest space tech programs on the planet, along with a bullshit generating LLM.

so yeah, maybe this is all fash tech, and maybe taking a stand of "I'm not touching that shit with a thousand-meter pole" is not "neoliberal purity culture". and ollama of all things? the shit pumped out by fucking Meta? are you shitting me?

in reply to elle

@elle @dhd6 @simonzerafa

"You used the wrong open model because I don't like the company that made it" is the actual definition of nonsense purity culture.

in reply to Cory Doctorow

@pluralistic @dhd6 @simonzerafa you wrote a book on how much of a shitbag company corpos like Meta are. now you're saying "oh it's not that bad, look it's marginally better than Google Docs spell checker"?! did someone hack your fucking account?

there are legitimately open models that originate from academic institutions, trained on open data with full consent. even those models take tens of thousands of euros to train. well outside the resources available to most open-source enjoyers

in reply to Cory Doctorow

@pluralistic @elle @dhd6 @simonzerafa I beg to differ. Demand is a powerful and legit tool of people responding to corporate behavior. Choosing a different product because you dislike a maker's conduct is nought but the invisible hand of the market slapping that maker for their conduct. Provenience does matter.

Smearing choice of provenience as wilful purism would be the perfect argument for any company to disregard social or ethical standards, constituting a right to demand the 'best' offer. Which would take us into the field of classic objectivism, and be in itself as willful and naive as the purism it accuses consumers of.

in reply to Cory Doctorow

@pluralistic @dhd6 @simonzerafa Good grief, these ad hoc rationalizations are absurd and you know it.

FYI, rockets are enormously environmentally destructive (fuel, pollution, noise, etc.). The planet would be better off with as few rockets launching as possible.

Saying an LLM is OK because some completely other "good" technology was invented by evil people is a *non argument*.

in reply to Cory Doctorow

@pluralistic @dhd6 @simonzerafa

The patent system is designed around an acknowledgement that we invent while "standing on the shoulders" of those who have gone before -- this premise is built into the patent system; policy decisions support this approach.

A patent grants exclusive rights to the inventor for a limited time. In exchange, the inventor must disclose how a person versed in the arts can replicate that invention.

The same is not true of literature / copyright. Different animal, different approach.

in reply to David Huggins-Daines

@dhd6 @pluralistic @simonzerafa IMHO this is already going down the wrong path.

If you follow anything I write or boost, you'll quickly note that I'm very vocal against AI. But that is a shorthand; my actual position is that I'm fine with the *tech*, strongly dislike the *waste* (where applicable), but my actual complaint is that the AI bubble is literally a fascist project.

Outside of FOMO, every reason people use or promote AI based things in this bubble is designed to...

in reply to Jens Finkhäuser

@dhd6 @pluralistic @simonzerafa ... disenfranchise people, by partially replacing them with a machine that imitates their work. And unlike people, machines can be owned.

Their output functions like a natural resource (except it's not natural), and there is insurmountable historic precedent that this promotes tyrannies. The TL;DR of it being that when you can mine natural resources, you are less reliant on a fed, educated, healthy, mobile population - so public spending becomes a waste.

in reply to Jens Finkhäuser

@dhd6 @pluralistic @simonzerafa The problem isn't ingesting text from the web. The problem isn't using this to generate new text, or spell check existing text.

The problem is that capitalist logic demands that this is used to move "value" from the general population to property the oligarchs own. Marx would have started talking about labour here.

That this promotes fascism is certainly the effect, and when you look at those who stand to win, probably also the reason.

in reply to Jens Finkhäuser

@dhd6 @simonzerafa So, yes, whether something is or isn't open plays into that, and I get the complaint.

But at the same time, it's a distraction.

The general position @pluralistic holds in the blog post is very much in line with distinguishing between the tech and the bubble.

Personally, I feel like responding to that with "yeah, but it's not good enough" is a very good example of the kind of Leftist purity culture that is so, so effective at hindering collaboration.

in reply to Ray McCarthy

@raymaccarthy @simonzerafa
I see. And do you have moral opinions about whether people should use Google Docs? Do you seek out strangers to tell them that it's dangerous to use Google Docs?
in reply to Cory Doctorow

@pluralistic @simonzerafa
"What is the incremental environmental damage created by running an existing LLM locally on your own laptop?"

I dunno. But how about a couple of million people?

The person who coined the term 'enshittification' defends LLMs. Just...wow. We truly are fucked.

Let's all do what Cory does!
☠️
Meanwhile:
technologyreview.com/2025/05/2…
#doomed #ClimateChange

in reply to Cory Doctorow

@pluralistic @simonzerafa
Missed the point, sir.

When one person does it...no big deal.

When a couple of million people do it...well, see the MIT article above.

in reply to Kid Mania

@pluralistic @simonzerafa
Subhead quote from the article:
"The emissions from individual AI text, image, and video queries seem small—until you add up what the industry isn’t tracking and consider where it’s heading next."
in reply to Kid Mania

@clintruin @simonzerafa
You are laboring under a misapprehension.

I will reiterate my question, with all caps for emphasis.

Which "couple million people" suffer harm when I run a model ON MY LAPTOP?

in reply to Cory Doctorow

@pluralistic @simonzerafa
I'll reiterate my response.

When you *alone* do it...no big deal.
When a couple of million do it ON THEIR OWN LAPTOPS...problem.

in reply to Kid Mania

@clintruin @simonzerafa
OK, sorry, I was under the impression that I was having a discussion with someone who understands this issue.

You are completely, empirically, technically wrong.

Checking the punctuation on a document on your laptop uses less electricity than watching a Youtube video.

in reply to Cory Doctorow

@pluralistic @simonzerafa
Fair enough, Cory. You're gonna do what you want regardless of my accuracy or inaccuracy anyway. And maybe I've misunderstood this. The same way many, many will.

But visualize this:

"Hey...I just read Cory Doctorow uses an LLM to check his writing."
"Really?"
"Yeah, it's true."
"Cool, maybe what I've read about ChatGPT is wrong too..."

in reply to Kid Mania

@clintruin @simonzerafa
This is an absurd argument.

"I just read about a thing that is fine, but I wasn't paying close attention, so maybe something bad is good?"

Come.

On.

in reply to Kid Mania

@clintruin @simonzerafa
Well, you could "do what Cory does" by familiarizing yourself with the conduct that you are criticizing before engaging in ad hominem.

To be fair, that's not unique to me, but people who fail to rise to that standard are doing themselves and others no good.

in reply to Cory Doctorow

I hate to dive into what is clearly a heated debate, but I want to add an answer to your question with a perspective that I think is missing: the power consumption for inference on your laptop is probably greater than in a datacenter. The latter is heavily incentivized to optimize power usage, since they charge by CPU usage or tokens, not watt-hours. (Power consumption != environmental damage exactly, but I have no idea how to estimate that part.)
in reply to twifkak

@twifkak @simonzerafa
parsing a doc uses as much juice as streaming a Youtube video and less juice than performing a gnarly transform on a hi-rez in the Gimp.

I measured.
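On a Linux laptop, one way such a measurement can be sketched without a hardware meter is the kernel's RAPL energy counters under sysfs. This is a rough, hypothetical illustration, not a description of Cory's actual setup: the sysfs path and counter range vary by machine, and it reads whole-package energy, not just the model's share.

```python
import time
from pathlib import Path

# Intel RAPL exposes a cumulative package-energy counter in microjoules
# on many Linux machines; the exact path varies by hardware (assumption).
RAPL = Path("/sys/class/powercap/intel-rapl:0/energy_uj")

def read_energy_uj():
    """Read the current cumulative energy counter, in microjoules."""
    return int(RAPL.read_text())

def joules_between(start_uj, end_uj, max_range_uj=2**32):
    """Convert two counter readings to joules, allowing for one wraparound.

    The true wrap point is published in the sibling file
    max_energy_range_uj; 2**32 here is an illustrative default.
    """
    delta = end_uj - start_uj
    if delta < 0:  # counter wrapped around its maximum
        delta += max_range_uj
    return delta / 1_000_000  # microjoules -> joules

def average_watts(task, *args):
    """Run a task and report the mean package power draw while it ran."""
    e0, t0 = read_energy_uj(), time.monotonic()
    task(*args)
    e1, t1 = read_energy_uj(), time.monotonic()
    return joules_between(e0, e1) / (t1 - t0)
```

Running `average_watts` around the typo-check and around a minute of video playback would give the kind of apples-to-apples comparison being claimed here.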

in reply to Simon Zerafa (Status: 😊)

@simonzerafa @pluralistic
At best 40% junk, but unless you are so expert you don't need it, you can't know which is plausible rubbish.
Would you play Russian Roulette every day for hours?
in reply to Ray McCarthy

@raymaccarthy @simonzerafa
Again, what does checking the punctuation on a single essay per day have to do with "play[ing] Russian Roulette every day for hours?"
in reply to tante

I really like and admire @pluralistic and have utmost respect for him, and that's why I'm totally baffled about why he is claiming "fruit of the poisoned tree" arguments as the cause of LLM scepticism.

The objections to LLMs aren't about origins but about what they are doing right now: destroying the planet, stealing labour, giving power over knowledge to LLM owners etc.

The objections have nothing to do with LLMs' origins, they're entirely about LLMs' effects in the here and now.

in reply to FediThing

@FediThing
Which parts of running a model on your own laptop are implicated in "destroying the planet?" How is checking punctuation "stealing labor?" Or, for that matter "giving power over knowledge to LLM owners?"
in reply to Cory Doctorow

(Hello Mr Doctorow! Just want to make clear I admire you a great deal and this isn't intended as an attack on you!)

Running a local LLM with no connection to outside providers might be a way of avoiding bad stuff, but I am not clear on how this relates to discussing origins of technologies?

It seems like there's ambiguity in your post about whether it applies just to people with homelabs wondering if they should try offline LLMs, or whether you are discussing LLMs as a general technology?

Almost everyone using LLMs will use the online kind, so objections to LLMs are (reasonably IMHO) based on that scenario.

in reply to FediThing

@FediThing
> I am not clear on how this connects to discussing origins of technologies

Because the arguments against running an LLM on your own computer boil down to, "The LLM was made by bad people, or in bad ways."

This is a purity culture standard, a "fruit of the poisoned tree" argument, and while it is often dressed up in objectivity ("I don't use the fruit of the poisoned tree"), it is just special pleading ("the fruits of the poisoned tree that I use don't count, because __").

in reply to Cory Doctorow

@FediThing
> Almost everyone using LLMs will use the online kind, so objections to LLMs are (reasonably IMHO) based on that scenario.

Except that in this specific instance, you are weighing in on an article that claims that it is wrong to run a local LLM for the purposes of checking for punctuation errors.

in reply to Cory Doctorow

Thank you for the responses 🙏

"Because the arguments against running an LLM on your own computer"

...ahhh okay. So was this post aimed more at a very narrow homelab kind of audience?

It's just, as a reader, the article's emphasis on examples of tech origins implies it's trying to defend LLMs in general? This probably is my ignorance as a reader, but it's how it came across to me, and led to bafflement.

in reply to Cory Doctorow

Thanks. Can totally see how that makes sense at a technical level for people who run their own offline services.

I think it's the ambiguity that is driving the discourse over this post. People are taking the "refusing to use a technology" section as a defence of LLMs in general?

If the angle was caging LLMs or something like that, it might make it clearer that you aren't endorsing the most common form of LLM?

Anyway, it's your call on this as author, just wanted to feed back on this because your writing matters and I hope feedback is helpful to it.

in reply to Shiri Bailem

@shiri
(Untagged Cory as I'm sure he is getting a lot of replies and I don't want to repeat myself at him.)

I don't think it's the first part that caused problems but the later parts, as they didn't explicitly mention offline LLMs and it was possible to read the later text as referring to all LLMs.

in reply to FediThing

@FediThing The link in question where he talked about it, and did explicitly say it; though he didn't use the "offline" label specifically, he basically described it as such. (The label itself is not purely self-explanatory, so wouldn't have helped much)

Here's the article link: pluralistic.net/2026/02/19/now…

On friendica the thumbnail of the page is what I've attached here, incidentally the key paragraph in question.

Screenshot of text reading: There is one technology that has made my POSSE life better, and it might surprise you. This year, I installed Ollama – an open-source LLM – on my laptop. It runs pretty well, even without a GPU. Every day, before I run Loren's python publication scripts, I run the text through Ollama as a typo-catcher (my prompt is "find typos"). Ollama always spots three or four of these, usually stuff like missing punctuation, or forgotten words, or double words ("the the next thing") or typos that are still valid words ("of top of everything else").

@tante
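For the curious: the workflow quoted in that screenshot can be scripted against Ollama's local REST API, which by default listens on localhost:11434. This is a minimal sketch; the model name and prompt wording are illustrative assumptions, not Cory's exact setup.

```python
import json
import urllib.request

# Ollama's default local endpoint (assumes a stock local install).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_typo_request(text, model="llama2"):
    """Build the JSON body for a one-shot, non-streaming typo check."""
    return {
        "model": model,
        "prompt": "find typos:\n\n" + text,
        "stream": False,  # one JSON reply instead of a token stream
    }

def check_typos(text, model="llama2"):
    """Send the document to the local model and return its reply text."""
    body = json.dumps(build_typo_request(text, model)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Nothing here leaves the machine: the request goes to a server on your own laptop, which is the whole point being argued in this thread.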

in reply to Shiri Bailem

Yup, that's the start, but then the text goes on to a discussion of very broad technologies and refusal to use them, which is where the ambiguity sort of creeps in. It isn't clear in the later sections if it's referring to LLMs in general, or just the very specific niche of offline LLMs.

I'm not posting this to attack Cory but to give feedback as a reader. I (incorrectly) took him to be talking about LLMs in general in the later section of the post, and it's possible other people are interpreting the later sections in the same way.

in reply to prince lucija

@prinlu @FediThing @pluralistic I agree with most things said in this thread, but on a very practical level, I'm curious what training data was used for the model used by @pluralistic 's typo-checking ollama?

for me, that training data is key here. was it consensually allowed for use in training?

because as I understand, LLMs need vast amounts of training data, and I'm just not sure how you would get access to such data consensually. would love to be enlightened about this :)

in reply to bazkie 👩🏼‍💻 bitplanes 🎵

@bazkie @prinlu @FediThing
I do not accept the premise that scraping for training data is unethical (leaving aside questions of overloading others' servers).

This is how every search engine works. It's how computational linguistics works. It's how the Internet Archive works.

Making transient copies of other peoples' work to perform mathematical analysis on them isn't just acceptable, it's an unalloyed good and should be encouraged:

pluralistic.net/2023/09/17/how…
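In miniature, the kind of corpus analysis being defended here is aggregate counting: the output is statistics about the works, not copies of them. A toy sketch (the sample corpus is invented for illustration):

```python
import re
from collections import Counter

def word_frequencies(documents):
    """Tally word frequencies across a corpus of texts.

    This is the sort of non-reproducing analysis that dictionaries
    and computational linguistics are built on: the result describes
    the works in aggregate without reproducing any of them.
    """
    counts = Counter()
    for doc in documents:
        counts.update(re.findall(r"[a-z']+", doc.lower()))
    return counts

corpus = [
    "The quick brown fox jumps over the lazy dog.",
    "The dog barks at the fox.",
]
freqs = word_frequencies(corpus)
# freqs now maps each word to its count across both documents
```

An OED-style usage note or a search index is this same operation at scale: transient copies in, statistics out.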

in reply to Cory Doctorow

This would be my take:

Search engines direct people to the work they index. They reward labour by directing people towards it.

Scraping without consent for training data lets people reproduce the work without crediting or rewarding the people who actually did the labour. That seems like labour theft?

If it is labour theft, then it isn't sustainable and that's part of why LLMs are so questionable as a technology.

in reply to FediThing

@FediThing @bazkie @prinlu
There are tons of private search engines, indices, and analysis projects that don't direct traffic to other works.

I could scrape the web for a compilation of "websites no one should visit, ever." That's not "labor theft."

in reply to Cory Doctorow

@pluralistic @bazkie @prinlu
Indexing works is a totally different thing to creating knock-offs of works, surely?

What Miyazaki said about AI knock-offs surely illustrates the difference?

in reply to FediThing

@FediThing @bazkie @prinlu
No one is defending "creating knock offs of works." Why would you raise it here? Who has suggested that this is a good way to use LLMs or a good outcome from scraping?
in reply to Cory Doctorow

@FediThing @bazkie @prinlu
The argument was literally, "It's not OK to check the punctuation in *your own work* if the punctuation checker was created by examining other peoples' work, because performing mathematical analysis on other peoples' work is *per se* unethical."
in reply to Cory Doctorow

@pluralistic @FediThing @prinlu I'd say "because performing [automated, mass scale] mathematical analysis on other peoples' work [without their consent] [with the goal of augmenting one's own work] is *per se* unethical" - and in that case, it's a statement I would agree with.
in reply to bazkie 👩🏼‍💻 bitplanes 🎵

@bazkie @FediThing @prinlu
You've literally just made the case against:

* Dictionaries
* Encyclopedias
* Bibliographies

And also the entire field of computational linguistics.

If that's your position, fine, we have nothing more to say to one another because I think that's a very, very bad position.

in reply to Cory Doctorow

I did not make that case, if you'd properly read my [additions] to the statement.

making dictionaries etc isn't automated on mass scales like feeding training data to LLMs is.

it's a very human job that involves a lot of expertise and takes a lot of time.

in reply to Cory Doctorow

@pluralistic @bazkie @FediThing @prinlu I think part of the issue here is that GenAI is being pushed so hard and fast *everywhere* that it's hard to be nuanced about what narrow use-cases might be acceptable or not.

We're living under a massive pro-LLM propaganda campaign. They have already set the terms of the debate with a maximalist position. It's no surprise that the backlash is similarly absolute.

in reply to Cory Doctorow

@pluralistic
No, because dictionaries are about language, which is a shared common; encyclopedias are about knowledge, which is a shared common; and bibliographies are a list of works, not a derivative.

Knowledge, language and a list of works cannot be copyrighted. You can use language, knowledge, words from the dictionary. You can quote an encyclopedia when referring to the source. None of that is even relevant to this discussion.

@bazkie @FediThing @prinlu @tante

in reply to Cory Doctorow

@pluralistic @bazkie @FediThing @prinlu encyclopedias don't say: look i made a painting, wanna buy?
but all that is kinda offtopic, imo
there are plenty of reasons to not use LLMs, besides that commons/copyright stuff, which are not purist and very much based on real-world issues. in a perfect world, theft wouldn't be an issue (imo), because we had overcome money and fossil energy. but using genLLM still would be morally wrong, because of its dangers, due to bias and failures. …/
in reply to Cory Doctorow

@pluralistic
The argument was "without the consent of the creators of said works." And you know that.

Don't be just another debate bro. Please.

@FediThing @bazkie @prinlu @tante

in reply to Cory Doctorow

If LLMs were only used for checking grammar that is one thing.

But by far the most common use of LLMs is labour theft through creating knock-offs, and that's something else.

I think the concern is that training data useful for the first case could be useful for the second case too? Hence the questions about where the training data comes from and where it ends up.

Kind of feels like it needs to be strictly ringfenced if it's to be ethical?

in reply to FediThing

@FediThing @bazkie @prinlu
Once again, you are replying to a thread that started when someone wrote that using an LLM to check the punctuation in your own work is ethically impermissible because no one should assemble corpora of other peoples' works for analytical purposes under any circumstances, ever.
in reply to Cory Doctorow

@pluralistic @FediThing @prinlu sure, but I'm responding here specifically to your statement that scraping for training isn't unethical per se.
in reply to Cory Doctorow

@pluralistic @bazkie @prinlu
I guess the question is if such data is assembled for a legitimate purpose, are there safeguards to stop the same data being used for an illegitimate purpose?

If there aren't any safeguards, then there's a danger the legitimate purpose is used as a shield/figleaf for illegitimate stuff?

in reply to Cory Doctorow

@pluralistic @prinlu @FediThing I think the difference to search engines is how an LLM reproduces the training data..

as a thought experiment; what if I'd scrape all your blogposts, then start a blog that makes Cory Doctorow styled blogposts, which would end up more popular than your OG blog since I throw billions in marketing money at it.

would you find that ethical? would you find it acceptable?

further thought experiment; let's say you lose most of your income as a result and have to stop making blogs and start flipping burgers at McDonald's.

your blog would stop existing, and so, my copycat blog would, too - or at least, it would stop bringing novel blogposts.

this kind of effect is real and will very much hinder cultural development, if not grind it to a halt.

that is a problem - this is culturally unsustainable.

in reply to bazkie 👩🏼‍💻 bitplanes 🎵

First: checking for punctuation errors and other typos *in my own work* in a model running on *my own laptop* has nothing - not one single, solitary thing - in common with your example.

Nothing.

Literally, nothing.

But second: I literally license my work for commercial republication and it is widely republished in commercial outlets without any payment or notice to me.

in reply to Cory Doctorow

but then you consented to that, right? you are in control of that.

also my example IS similar - after all, it's data scraped without consent, used to create another work. the typo-checker changes your blogpost based on my training data, in the same way my copycat blog changes 'my' works based on your training data.

sure, it's on a way different scale - deliberately, to more clearly show the principle - but it's the same thing.

in reply to bazkie 👩🏼‍💻 bitplanes 🎵

@bazkie

Should we ban the OED?

There is literally no way to study language itself without acquiring vast corpora of existing language, and no one in the history of scholarship has ever obtained permission to construct such a corpus.

in reply to Cory Doctorow

@pluralistic @bazkie

Dictionaries reference the sources they use for examples in the entries themselves.

LLMs lose the references at training time.

You've got this dead wrong.

in reply to Cory Doctorow

@pluralistic After reading so many comments, it is pretty clear who here would be opposing the creation of Napster and torrenting and be defending RIAA... They are also clearly very much against Internet Archive, shadow libraries, etc, simply because they can't take any disagreement.

Who knew running a local LLM, that uses the same energy as watching a youtube video, to spellcheck your own work would bring out such a mob.

in reply to Cory Doctorow

@pluralistic @FediThing you’re attempting to legitimize use of an unethical technology for something you don’t actually need a plausible-sounding-wall-of-text generator for

it goes beyond “it’s made by bad people in bad ways”. it’s a “”tool”” that actively causes cognitive decline and psychosis and sucks the soul out of everything it touches. and mind you promoting and legitimizing it is an act of support for those bad people and their bad ways. your deflection is typical of someone with no regard for ethics

“I installed Ollama” instantly gives a person away as a techbro

  • your not-so-friendly not-so-neighborhood “””liberal”””
in reply to zivi

@zaire @FediThing
I'm not a liberal, I'm a leftist, so perhaps this is why I disagree with you.

The argument that "something is unethical because someone else used it in an unethical way" is so incoherent that it doesn't even rise to the level of debatability.

in reply to Cory Doctorow

So this is some kind of spell-checker, which is already in LibreOffice? I'm not sure why I would use that instead.

I use offline AI, esp for visual effects, subtitles, fixing dialogue errors, etc. There are "deep fake technologies" useful for mocap, camera tracking, and other such tedious work. They don't use prompts, don't generate art, and are trained on your own inputs.

Perhaps we need a new name to differentiate it from the online genAI tech.

This entry was edited (3 weeks ago)
in reply to Cory Doctorow

@pluralistic @FediThing
What's the difference between your argument here and "Slavery is OK because I didn't kidnap the slaves; I just inherited them from my dad." ??
in reply to Mark Saltveit

@taoish @FediThing
Because there are no slaves in this instance. Because no one is being harmed or asked to do any work, or being deprived of anything, or adversely affected in *any articulable way*.

But yeah, in every other regard, this is exactly like enslaving people.

Sure.

in reply to Cory Doctorow

@pluralistic @FediThing
Unless you consider stolen intellectual property (and ongoing copyright violations) a harm, a deprivation, &c.

But your general analogy against "fruit of the poison tree" morality would seem to also apply in the case of slavery -- in my hypothetical, the person didn't enslave anyone. They just inherited a slave from someone who did. That is indeed "fruit of a poisoned tree", even if they just continued an existing enslavement.

We have a real world recent example -- the cell lines stolen from Henrietta Lacks. Do you dismiss any moral concerns about using her cell line without consent as a neo-liberal moral purity trap?

in reply to Mark Saltveit

@taoish @FediThing
Scraping and training are not copyright infringements: theguardian.com/us-news/ng-int…
in reply to Cory Doctorow

I think you can answer these questions yourself.

Suppose you wore a coat made out of mink fur. The minks are already dead, simply wearing the coat won't kill more minks. What does wearing mink fur have to do with cruelty to minks?

Suppose you live in the time of the Luddites. Legislation prohibits trade unions and collective bargaining. Mill owners introduce machines, reducing wages. But you build your own machine. Problem solved? You helping labor or capital?

@FediThing @tante

This entry was edited (3 weeks ago)
in reply to Nelson

@skyfaller @FediThing
This is a "fruit of the poisoned tree" argument.

Suppose you use a computer to post to Mastodon, despite the fact that silicon transistors were invented by the eugenicist William Shockley, who spent his Nobel money offering bribes to women of color to be sterilized?

Suppose you sent that Mastodon post on a packet-switched network, despite the fact that this technology was invented by the war criminals at the RAND corporation?

in reply to Cory Doctorow

@skyfaller @FediThing
Also, you're wrong about the Luddites, just as a factual matter. The guilds the Luddites sprang from weren't prohibited by law, they were *protected* by law, and the Luddites' cause wasn't about gaining new protections under statute, but rather, enforcing existing statutory protections.

(Also: the Luddites didn't oppose steam looms or stocking frames; their demands were for fair deployment of these)

in reply to Cory Doctorow

@pluralistic Thank you for the fact check. I was paraphrasing that text from the popular Nib comic: thenib.com/im-a-luddite/

If this contains factual inaccuracies I will need to do more research and perhaps stop sharing that comic.

@FediThing @tante

in reply to Nelson

@skyfaller @FediThing I strongly recommend Brian Merchant's "Blood in the Machine" as the best modern history of the Luddites.
in reply to Cory Doctorow

@pluralistic I don't think mink fur or LLMs are comparable to criticizing the origins of the internet or transistors. It's the process that produced mink fur and LLMs that is destructive, not merely that it's made by bad people.

For example, LLM crawlers regularly take down independent websites like Codeberg, DDoSing, threatening the small web. You may say "but my LLM is frozen in time, it's not part of that scraping now", but it would not remain useful without updates.

@FediThing @tante

in reply to Nelson

@skyfaller @FediThing
No. Literally the same LLM that currently finds punctuation errors will continue to do so. I'm not inventing novel forms of punctuation error that I need an updated LLM to discover.
in reply to Cory Doctorow

@pluralistic Ok, fair enough, if spell checking is literally the only thing you use LLMs for.

I still think you wouldn't rely on a 1950s dictionary for checking modern language, and language moves faster on the internet, but I'm willing to concede that point.

I still think a deterministic spell checker could have done the job and not put you in this weird position of defending a technology with wide-reaching negative effects. But I guess your post was for just that purpose.

@FediThing @tante

in reply to Nelson

@skyfaller @FediThing
I'm not using it for spell checking.

Did you read the article that is under discussion?

in reply to Cory Doctorow

@pluralistic I apologize, I did in fact read the relevant section of your post, and I was using spell-checking as shorthand for all typo checking, because deterministic grammar checkers have also existed for some time, although not as long as spell checkers and perhaps they have not been as reliable. I understand that LLMs can catch some typos that deterministic solutions may not.

I just think we should put more effort into improving deterministic tools instead of giving up.

@FediThing @tante

in reply to Nelson

This is precisely it; it's about the process, not their distance from Altman, Amodei, et al. (which the Ollama project and those like it achieve).

The LLM models themselves are, per this analogy, still almost entirely of the mink-corpse variety, and I think it's a stretch to scream "purity!" at everyone giving you the stink eye for the coat you're wearing.

It's not impossible to have and use a model, locally hosted and energy-efficient, that wasn't directly birthed by mass theft and human abuse (or training directly off of models that were). And having models that aren't, that are genuinely open, is great! That's how the wickedness gets purged and the underlying tech gets liberated.

Maybe your coat is indeed synthetic, that much is still unclear, because so far all the arguing seems to be focused on the store you got it from and the monsters that operate the worst outlets.

in reply to Correl Roush

@correl @skyfaller @FediThing
More fruit of the poisoned tree.

"This isn't bad, but it has bad things in its origin. The things I use *also* have bad things in their origin, but that's OK, because those bad things are different because [reasons]."

This is the inevitable, pointless dead-end of purity culture.

in reply to Cory Doctorow

@pluralistic This seems like whataboutism. Valid criticisms can come from people who don't behave perfectly, because otherwise no one would be able to criticize anything. Similarly, we can criticize society while participating in it.

The point I'd like to make (that doesn't seem to be landing) is that LLMs aren't just made by bad people, but are also made through harmful processes. Harm dealt mostly during creation can be better than continuing harm, but still harmful.
@correl @FediThing @tante

in reply to Nelson

@pluralistic @correl @FediThing In the climate crisis we are often concerned about "embodied emissions", things made with fossil fuels that may not use fossil fuels once they're created. If we don't change our fossil fuel using production systems, those embodied emissions could be enough to kill us.

I'd say that the literal and figurative embodied emissions of even local LLMs are sufficient to make them problematic to use. Individuals avoiding them is insufficient but necessary.

in reply to Nelson

@skyfaller @correl @FediThing
That is completely backwards.

The entire point of measuring embodied emissions is to *make use of things that embody emissions*.

We improve old, energy inefficient buildings *because they represent embodied emissions* rather than building new, more efficient buildings because the *net* emissions of building a new, better building exceed the emissions associated with a remediated, older building.

in reply to Cory Doctorow

@pluralistic You're missing my point. Old houses should be used, but if new houses are built using fossil fuels, then we can cook ourselves by building them even if new buildings are fully electrified.

It feels like you're ignoring the context where LLMs are still being created. It's ethically different to use something made by slaves if slavery is not in the past. If you golf on a golf course maintained by prison labor yesterday, it matters that prisoners will clean it again tomorrow.

@correl

in reply to Nelson

@skyfaller @correl

I'm not ignoring that context, it is *entirely irrelevant*, because I am *not* using some prospective, as-yet-to-be-trained LLM to check punctuation on my laptop. I am using an *actual, existing* LLM.

So if your argument is, "If you did something that's not the thing you've done, that would be bad," my response is, "Perhaps that's true, but I have no idea why you would seek out a stranger to discuss that subject."

in reply to Nelson

@skyfaller @correl @FediThing
Yes, that is just more fruit of the poisoned tree.

This thing harmed people in its creation, therefore the thing is bad, as are all things derived from it.

However, the things *I* use don't count, because the bad things in their history are different because [insert incoherent rationalization].

in reply to Cory Doctorow

While I can understand your argument and almost certain exhaustion at hollow criticism, that response feels very dismissive of the points being made against your application of that argument.

I'm not sure how fruitful of an argument can be had with regard to what you may or may not be using, as you really haven't clarified that anyhow besides locally hosted software that could be used to run terrible models, so this whole mess is just an endless back and forth of "You seem to be dodging the nature of the evil you may be accepting" vs "You're over-concerned with purity", and I think that's justifiably leaving a bad taste in everyone's mouth.

in reply to Correl Roush

@correl @skyfaller @FediThing
> as you really haven't clarified that anyhow

I'm sorry, this is entirely wrong.

The fact that you didn't bother to read the source materials associated with this debate in no way obviates their existence.

I set out the specific use-case under discussion in a single paragraph in an open access document. There is no clearer way it could have been stated.

in reply to Cory Doctorow

Chiming in to state that it's routine to re-state and re-re-state principles that get lost in long reads and long threads such as this one, where any late-comer needs to skim because of the tl;dr factor. There's a long standing principle based on this phenomenon: tell them in short what you're saying, explain what you said, then tell them in summary what you said.
in reply to Radio Free Trumpistan

@claralistensprechen3rd @skyfaller @FediThing @correl

I don't know what this has to do with someone stating "you haven't clarified" something, when you have.

Also, I have reposted the paragraph in question TWICE this morning.

in reply to Cory Doctorow

Again, this feels dismissive, and dodges the argument. The clarity I was referring to wasn't the use case you laid out (automated proofreading) or the platform (Ollama), but (as has been discussed at length through this thread of conversation) which models are being employed.

This entire conversation has been centered around how currently available models are evil not due to vague notions of who incepted the technology they're based upon, but due to the active harm employed in their creation.

To return to the discussion I'm attempting to have here, I find your fruits of the poisoned tree argument weak, particularly when you're invoking William Shockley (who most assuredly had no direct hand in the transistors installed in the hardware on my desk or their component materials) as a counterpoint to the stolen work and egregious cost that are intrinsic to even the toy models out there. It reads to me as employing hyperbole and false equivalence defensively rather than focusing on why what you're comfortable using is, well, comfortable.

in reply to Cory Doctorow

@pluralistic i'd start with the part that the model probably came pre-trained. Or was it trained by you on your laptop...? @FediThing @tante
in reply to Lupino

@LupinoArts @FediThing
This is a purity culture argument about the "fruit of the poisoned tree." The silicon in your laptop was invented by a eugenicist. The network your packets transit was invented by war criminals. The satellite the signal travels on was launched on a rocket descended from Nazi designs that were built by death-camp slaves.
in reply to Cory Doctorow

@LupinoArts @FediThing
To be clear, I completely reject this argument as a form of special pleading. Everyone has a reason why *their* fruit of the poisoned tree is OK, but other peoples' fruit of the poisoned tree is immoral.
in reply to Cory Doctorow

@pluralistic i guess this misses the point: the particular chip in my laptop wasn't made by war criminals (i hope...), but the model you do use was trained at the cost of vast amounts of energy and water. I'm not sure this is completely comparable, tbh.

@FediThing @tante

in reply to Lupino

@pluralistic and yes, i'm aware that producing a chip also costs vast amounts of energy and water... but at least my chip is used to solve a multitude of purposes, while an LLM that checks spelling and grammar is built and trained for one single use-case (that, nb, could also be done without an LLM). So yes, I do differentiate. @FediThing @tante
in reply to Lupino

@LupinoArts @FediThing
Llama 2 was not built to check spelling and grammar. That's "not even wrong."
in reply to Lupino

@LupinoArts @FediThing
No, this is just more "fruit of the poisoned tree" and your argument that your fruit of the poisoned tree doesn't count is the normal special pleading that this argument always decays into.
in reply to Cory Doctorow

@pluralistic sorry, i'm just not good at making a point. To me, the "forbidden fruit" isn't "LLM" as such, but "using an LLM for certain purposes". I think there are actually use-cases for stochastic inference machines (like folding proteins or structuring references), but, as @tante wrote (better: as I understand him), there are use-cases that one very much can reject in their entirety. And that should be okay.
in reply to Lupino

@LupinoArts
I never denied the existence of "use-cases that...one can reject in its entirety."
in reply to Cory Doctorow

The problem with AI is not primarily with tech itself, but human traits of laziness, greed and ignoring unpriced externalities for short-sighted personal gains. Those of course exist even without AI, but AI allows for their damage to be multiplied many thousandfold which upgrades it from minor annoyance to existential crisis.

And the problem with Cory saying he's preferring using his AI is not exact cost of him using AI model (which is tiny), it is NORMALIZING 1/3

This entry was edited (2 weeks ago)
in reply to Cory Doctorow

@pluralistic @Colman @FediThing
This is...disappointing. To be fair, I'm disappointed in almost everyone in this thread for engaging in schoolyard shit throwing, but you're much higher in status and your shit sticks. Have a conversation. Figure out where these views can comingle. Find common understanding or you risk using your high status to fracture an already unstable alliance of people who want technology to operate safely and for the benefit of our shared humanity.

Do better.

in reply to Colman Reilly

@Colman @FediThing @pluralistic this is just a dumb attack on Cory as a person which I will not accept. You can talk about what he or I wrote (both things can be criticized) but have some respect
in reply to tante

Yup. Cory Doctorow has done so much good by turning complex important topics into easy-to-grasp concepts like enshittification, it's made the debate over tech much richer and more widely held.

I'm not attacking him personally or his work.

This entry was edited (3 weeks ago)
in reply to FediThing

@FediThing … Certainly this is true of my reasoning for a #noAI stance. For me it's about climate, economic and social impacts of the ever growing mega-LLMs, and the craze to use them for all kinds of purposes for which they are unfit.

I am much less concerned with a local instance checking a writer's grammar. Lumping those two together makes little sense, to me.

On some other topics, I find @pluralistic's leadership constructive and helpful.

@tante

in reply to FediThing

@FediThing @pluralistic Some people - in fact quite a lot; if my reading is correct - do indeed argue that LLMs can *never* be ethically used because they are “trained on stolen work”.
in reply to Cory Doctorow

It really depends a bit on the details, doesn't it. If I copy a CD, I also perform some mathematical analysis on it, error checking etc. Maybe I even make a non-exact copy by passing it through some filter to make it sound better. But it's totally different from listening to a bunch of Beatles songs and then getting inspired to write my own songs in a similar style.
This entry was edited (3 weeks ago)
in reply to Cory Doctorow

@pluralistic @ianbetteridge @FediThing
It's still profit loss damage curable by income transfer if the illegally acquired data was used to create that profit. Dataset prominence should provide the percentage of profits, and prominence is data size but also inference causality. The primary literature should not be able to be diluted with free intellectual property.

I don't know if any of this is actual case law and I'm not a lawyer.

This entry was edited (3 weeks ago)
in reply to David

@drdrowland @ianbetteridge @FediThing
You're talking about ways of using models, not the creation of models. It's possible to make a model that does illegal things. But training a model is not illegal.
in reply to Cory Doctorow

@pluralistic @ianbetteridge @FediThing “Mathematical analysis” is doing a lot of work here. It could mean gathering meaningless statistics. Or it could mean capturing the qualities (deviations from the average) that make a particular work of art (or author) special, creative, surprising—for use in simulacra.

I think that's harmful, to the culture as a whole, if not to the artworks and artists getting regurgitated.

in reply to James Gleick

@gleick @ianbetteridge @FediThing
Let's stipulate to that (I don't agree, as it happens, but that's OK). It's still not a copyright infringement to enumerate and analyze the elements of a copyrighted work.

For the record, I think AI art is bad and neither consume nor make it.

in reply to Cory Doctorow

@pluralistic @ianbetteridge @FediThing I'm not claiming that's copyright infringement. Even if one respects the general framework of copyright, which I know you don’t, it seems hopeless to apply it to this AI mess.

But there is a kind of theft here. Not that it's actionable or measurable. But it’s nontrivial. It's related to questions of impersonation. It's an assault on individuality. Whatever your reasons for thinking AI art is bad (I have some sense), it's related to that, too.

in reply to Cory Doctorow

@pluralistic @gleick @ianbetteridge @FediThing there have been documented cases of LLMs regurgitating stuff from their training set verbatim, which clearly IS copyright infringement; and that means some parts of the training set are encoded in the weights of the model, which looks like publishing a copyrighted work to me. If publishing a JPEG of an image without copyright to it would be infringing, isn't publishing a model that can recreate something also infringing?
in reply to Cory Doctorow

@pluralistic @ianbetteridge @FediThing If that “mathematical analysis” regurgitates near verbatim works created by other people, it certainly is committing IP theft, and LLMs will happily do that. The “mathematical analysis” is effectively a form of lossy compression on its training data which a prompt can later extract.
in reply to Bruno Nicoletti

@bjn @ianbetteridge @FediThing
Once again, you're talking about *using* a model, not training a model.

Also "IP theft" isn't a thing. Perhaps you mean copyright infringement?

in reply to Cory Doctorow

@pluralistic @ianbetteridge @FediThing I’ll give you pedant points for copyright infringement, which is what most people mean by “IP theft”. As for training/using, the difference is somewhat moot. The models are trained to be used, and if trained on copyrighted data without a license, you’ve encoded that data into the model which might then regurgitate it thus facilitating copyright infringement.
in reply to Bruno Nicoletti

@bjn @ianbetteridge @FediThing it is a bedrock of copyright law that devices 'capable of sustaining a substantial non-infringing use' are lawful. Decided in 1984 (SCOTUS/Betamax) and repeatedly upheld.

It is categorically untrue that merely because a model's output can infringe copyright that the model is therefore illegal.

There's not much that's truly settled in American limitations and exceptions, but this is.

in reply to Cory Doctorow

@bjn @ianbetteridge @FediThing and as befits UK fair dealing (and related limitations and exceptions), we've had opinions from IPREG affirming that training a model doesn't infringe.
in reply to Cory Doctorow

Then the laws are not fit for purpose. The whole point of copyright is to encourage people to produce works by being sure they get the benefit of those works. If my works can be encoded into a bunch of matrix weights and reproduced without attribution, let alone financial recompense, then why should I bother? Google is doing its best to effectively steal the bread out of creators' mouths with its AI summaries. It may be legal, but it stinks.
This entry was edited (3 weeks ago)
in reply to Bruno Nicoletti

@bjn @ianbetteridge @FediThing by all means say 'i don't like this technology' but don't conflate that with 'therefore it is illegal'
in reply to Cory Doctorow

@pluralistic @ianbetteridge @FediThing Well apart from Anthropic having to pay $1.5B for copyright infringement, it’s all above board, 🙄. It’s not a matter of liking the technology or not, machine learning is capable of cool and useful things. However, how LLMs are being used and pushed is both immoral and culturally destructive. I’m surprised you are buying into it.
in reply to Cory Doctorow

@pluralistic
> untrue that merely because a model's output can infringe copyright that the model is therefore illegal.

Mhmmm naaah overfitting and memorization are very much a thing, especially in the case of LLM where they've completely given up on controlling data leaks, and where memorization has been demonstrated rather unambiguously e.g. with the suitesparse example...

Not to imply that "illegal" is bad ofc, or that copyright is justifiable

@bjn @ianbetteridge @FediThing @tante

in reply to Cory Doctorow

@pluralistic @bjn @ianbetteridge @FediThing
I’d argue that it’s a bit more nuanced. Training and inference are two separate stages with their own rules.

For non profit, academic research, excerpts are allowed to be collected, but not the whole work. You still can’t circumvent DRM either.

Llama, it might be argued, is non-profit, but lifting whole works to train on still isn't allowed.

in reply to tante

If you link to an academic paper as support for your argument, I will download that academic paper. This is simply nature taking its course.
in reply to tante

"Artifacts and technologies have certain logics built into their structure that do require certain arrangements around them or that bring forward certain arrangements… Understanding this you cannot take any technology and 'make it good.'"
in reply to ⁂ L. Rhodes

I'd actually take this a step further and say that technologies ARE social arrangements.
in reply to ⁂ L. Rhodes

@lrhodes I agree, I believe that we do encode our values into our technology. Particularly with what we code and what we use to code or write.
in reply to Esther Payne

@onepict Yeah, code is a pretty literal manifestation of that principle, right?

And one of the major advantages of AI from an ideological point of view is that it allows the provider to write their values into *other people's code*.