Yesterday Cory Doctorow argued that refusal to use LLMs was mere "neoliberal purity culture". I think his argument is a strawman, doesn't align with his own actions, and delegitimizes important political actions we need to take in order to build a better cyberphysical world.
tante.cc/2026/02/20/acting-eth…
Acting ethically in an imperfect world
Life is complicated. Regardless of what your beliefs or politics or ethics are, the way that we set up our society and economy will often force you to act against them: You might not want to fly somewhere but your employer will not accept another mod…
tante (Smashing Frames)

Cory Doctorow
in reply to tante • • •
R.L. LE
in reply to Cory Doctorow • • •
Cory Doctorow
in reply to R.L. LE • • •@herrLorenz
> Cory shows his libertarian leanings here...
> Many people criticizing LLMs come from a somewhat leftist (in contrast to Cory’s libertarian) background.
Cory Doctorow
in reply to Cory Doctorow • • •This falls into the "you are entitled to your own opinions, but not your own facts" territory.
R.L. LE
in reply to Cory Doctorow • • •
CJPaloma might be hiking
in reply to Cory Doctorow • • •@pluralistic @herrLorenz that second example goes well into overreach territory, and I can see why you'd be not happy with it.
And/but a big part of libertarian appeal is that it muddies how being "individually free from regulation" can be cast as liberatory, as if individual freedom is all that's needed. "I'm free when there are no regulations" is obviously shallow to lefties, but individual freedom is also a component of why people are lefties; there's real overlap.
Cory Doctorow
in reply to CJPaloma might be hiking • • •@CJPaloma @herrLorenz
There is no virtue in being constrained or regulated per se.
Regulation isn't a good unto itself.
Regulation that is itself good - drawn up for a good purpose, designed to be administrable, and then competently administered - is good.
CJPaloma might be hiking
in reply to Cory Doctorow • • •@pluralistic @herrLorenz Of course! Agreed.
The overlap ends around -when- reasons are "good" enough. Laws about how to treat other people are relatively easy.
But until enough people see rivers on fire, regulations on -doing certain things- aren't imposed, despite many people saying "hey, this isn't good" decades prior.
Not reining in/regulating until after -foreseeable- catastrophes results in all kinds of shit shows (from the MIC, to urban sprawl, to plastics, to tax laws, etc)
Joris Meys
in reply to Cory Doctorow • • •@tante made. He had the same complaint for starters (your argument was heavily drenched in "you ppl are purists"), but he also makes the valid argument that technology isn't neutral in itself. Open weights based on intellectual theft and forced labor are still a problem. Until we have a discussion on how the weights come to fruition, LLMs are objectively problematic from an ethical view. That has nothing to do with purism.
Simon Zerafa (Status: 😊)
in reply to tante • • •That doesn't seem to be the best idea @pluralistic
AI and LLM output is 90% bullshit, and most people don't have the time or the patience to work out which 10% might actually be useful.
That's completely ignoring the environmental and human impacts of the AI bubble.
Try buying DDR memory, a GPU or an SSD / HDD at the moment.
Cory Doctorow
in reply to Simon Zerafa (Status: 😊) • • •@simonzerafa
What is the incremental environmental damage created by running an existing LLM locally on your own laptop?
As to "90% bullshit" - as I wrote, the false positive rate for punctuation errors and typos from Ollama/Llama2 is about 50%, which is substantially better than, say, Google Docs' grammar checker.
kel
in reply to Cory Doctorow • • •@pluralistic
I am astonished that I have to explain this,
but very simply in words even a small child could understand:
using these products *creates further demand*
- surely you know this?
Well, either you know this and are being facetious, or you are a lot stupider than I ever thought possible for someone with your privilege and resources.
I am absolutely floored at this reveal, just wow, "where's Cory and what have you done with him?" 🤷
Massive loss of respect!
@simonzerafa @tante
Simon Zerafa (Status: 😊)
in reply to Cory Doctorow • • •@pluralistic
Of course, I am speaking in generalities.
Encouraging the use of LLMs is counterproductive in so many ways, as I highlighted.
Pop a power meter on that LLM-adorned PC and let us all know what the power usage looks like with and without your chosen LLM running on a typical task 🙂
That's power that's generated somewhere, even if it's renewable energy.
The main issue with LLMs is that they don't encourage critical thinking, in a world already suffering a massive shortage of it.
Cory Doctorow
in reply to Simon Zerafa (Status: 😊) • • •@simonzerafa
As I wrote (and it seems you haven't read what I wrote, which is weird, because that seems like a good first step if you're going to criticize my conduct), I'm running Ollama on a laptop that doesn't even have a GPU.
Its power consumption is comparable to, say, watching a YouTube video.
I know this because my laptop is running free software that lets me accurately monitor its activity, and because the model is also free software.
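On a Linux laptop, one rough way to sanity-check a claim like this is to sample the battery's reported draw while the task runs. This is a minimal sketch, not the tooling Cory describes; it assumes a sysfs battery at `/sys/class/power_supply/BAT0/power_now`, which varies by machine (some batteries are named `BAT1`, and some expose `energy_now` instead):

```python
import time
from pathlib import Path

# Hypothetical sysfs path; battery name and available fields vary by machine.
POWER_NOW = Path("/sys/class/power_supply/BAT0/power_now")

def avg_watts(samples_uw):
    """Average a list of microwatt readings and convert to watts."""
    return sum(samples_uw) / len(samples_uw) / 1_000_000

def sample_power(seconds=30, interval=1.0):
    """Sample the battery's reported draw once per interval (on battery power)."""
    samples = []
    for _ in range(int(seconds / interval)):
        samples.append(int(POWER_NOW.read_text()))
        time.sleep(interval)
    return avg_watts(samples)

# Usage: call sample_power() once while idle and once while the model is
# checking a document, then compare the two averages.
```

Tools like powertop report similar numbers with per-process attribution, which is closer to what a "power meter on that PC" comparison would need.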
Cory Doctorow
in reply to Cory Doctorow • • •Checking for punctuation errors does not discourage critical thinking. It's weird to laud "critical thinking" and also make this claim.
tante
in reply to Cory Doctorow • • •
David Huggins-Daines
in reply to tante • • •@pluralistic @simonzerafa I agree in principle with Cory, but I really wish that he had clarified that:
1. Ollama is not an LLM; it's a server for running various models, of varying degrees of openness.
2. Open weights is not open source; the model is still a black box. We should support projects like OLMo, which are completely open, down to the training data set and checkpoints.
3. It's quite difficult to "seize that technology" without using Someone Else's Computer to do so (a.k.a clown/cloud)
David Huggins-Daines
in reply to David Huggins-Daines • • •@pluralistic @simonzerafa But ALSO: using a multi-billion-parameter synthetic text extruding machine to find spelling and syntax errors is a blatant example of "doing everything the least efficient way possible" and that's why we are living on an overheating planet buried under toxic e-waste.
If I think about it harder I could probably come up with a more clever metaphor than killing a mosquito with a flamethrower, but you get the idea.
Cory Doctorow
in reply to David Huggins-Daines • • •@dhd6 @simonzerafa
No. It's like killing a mosquito with a bug zapper whose history includes thousands of years of metallurgy, hundreds of years of electrical engineering, and decades of plastics manufacture.
There is literally no contemporary manufactured good that doesn't sit atop a vast mountain of extraneous (to that purpose) labor, energy expenditure and capital.
David Huggins-Daines
in reply to Cory Doctorow • • •@pluralistic @simonzerafa As always, yes and no. A bug zapper is designed to zap bugs, it is a simple mechanism that does that one thing, and does it well. An LLM is designed to read text and generate more text.
That we have decided that the best way to do NLP is to use massively overparameterized word predictors that we have trained using RL to respond to prompts, rather than just, like, doing NLP, is just crazy from an engineering standpoint.
Rube Goldberg is spinning in his grave!
Cory Doctorow
in reply to David Huggins-Daines • • •@dhd6 @simonzerafa
Remember when Usenet's backbone cabal worried about someone in Congress discovering that the giant, packet-switched research network that had been constructed at enormous public expense was being used for idle chit chat?
The nature of general purpose technologies is that they will be used for lots of purposes.
David Huggins-Daines
in reply to Cory Doctorow • • •@pluralistic @simonzerafa indeed, I guess the question is whether the scale of the *ahem* waste, fraud and abuse *ahem* of resources that LLMs seem to imply, even in benign use cases like yours, is out of line with historical precedent or not.
Am I an old man yelling at a cloud?
No, it's the children who are wrong!
Cory Doctorow
in reply to David Huggins-Daines • • •@dhd6 @simonzerafa
Rockets were literally perfected in Nazi slave labor camps.
elle
in reply to Cory Doctorow • • •@pluralistic @dhd6 @simonzerafa what a shit take dude. rockets being perfected by nazis, project paperclip, and now a neonazi in charge of one of the largest space tech programs on the planet, along with a bullshit generating LLM.
so yeah, maybe this is all fash tech, and maybe taking a stand of "I'm not touching that shit with a thousand-meter pole" is not "neoliberal purity culture". and ollama of all things? the shit pumped out by fucking Meta? are you shitting me?
Cory Doctorow
in reply to elle • • •@elle @dhd6 @simonzerafa
"You used the wrong open model because I don't like the company that made it" is the actual definition of nonsense purity culture.
elle
in reply to Cory Doctorow • • •@pluralistic @dhd6 @simonzerafa you wrote a book on how much of a shitbag company corpos like Meta are. now you're saying "oh it's not that bad, look it's marginally better than Google Docs spell checker"?! did someone hack your fucking account?
there are legitimately open models that originate from academic institutions, trained on open data with full consent. even those models take tens of thousands of euros to train, well outside the resources available to most open-source enjoyers
Papageier
in reply to Cory Doctorow • • •@pluralistic @elle @dhd6 @simonzerafa I beg to differ. Demand is a powerful and legit tool of people responding to corporate behavior. Choosing a different product because you dislike a maker's conduct is nought but the invisible hand of the market slapping that maker for their conduct. Provenience does matter.
Smearing choice of provenience as wilful purism would be the perfect argument for any company to disregard social or ethical standards, constituting a right to demand the "best" offer. That would take us into the field of classic objectivism, and be in itself as wilful and naive as the purism it accuses consumers of.
Jared White (ResistanceNet ✊)
in reply to Cory Doctorow • • •@pluralistic @dhd6 @simonzerafa Good grief, these ad hoc rationalizations are absurd and you know it.
FYI, rockets are enormously environmentally destructive (fuel, pollution, noise, etc.). The planet would be better off with as few rockets launching as possible.
Saying an LLM is OK because some completely other "good" technology was invented by evil people is a *non argument*.
Cory Doctorow
in reply to Jared White (ResistanceNet ✊) • • •@jaredwhite @dhd6 @simonzerafa You're right, that would be a silly thing to say.
Good thing I didn't say it.
Drew Crecente (they/them)
in reply to Cory Doctorow • • •@pluralistic @dhd6 @simonzerafa
The patent system is designed around an acknowledgement that we invent while "standing on the shoulders" of those who have gone before -- this premise is built into the patent system; policy decisions support this approach.
A patent grants exclusive rights to the inventor for a limited time. In exchange, the inventor must disclose how a person versed in the art can replicate that invention.
The same is not true of literature / copyright. Different animal, different approach.
Jens Finkhäuser
in reply to David Huggins-Daines • • •@dhd6 @pluralistic @simonzerafa IMHO this is already going down the wrong path.
If you follow anything I write or boost, you'll quickly note that I'm very vocal against AI. But that is a shorthand; my actual position is that I'm fine with the *tech*, strongly dislike the *waste* (where applicable), but my actual complaint is that the AI bubble is literally a fascist project.
Outside of FOMO, every reason people use or promote AI based things in this bubble is designed to...
Jens Finkhäuser
in reply to Jens Finkhäuser • • •@dhd6 @pluralistic @simonzerafa ... disenfranchise people, by partially replacing them with a machine that imitates their work. And unlike people, machines can be owned.
Their output functions like a natural resource (except it's not natural), and there is insurmountable historic precedent that this promotes tyrannies. The TL;DR of it being that when you can mine natural resources, you are less reliant on a fed, educated, healthy, mobile population - so public spending becomes a waste.
Jens Finkhäuser
in reply to Jens Finkhäuser • • •@dhd6 @pluralistic @simonzerafa The problem isn't ingesting text from the web. The problem isn't using this to generate new text, or spell check existing text.
The problem is that capitalist logic demands that this is used to move "value" from the general population to property oligarchs. Marx would have started talking about labour here.
That this promotes fascism is certainly the effect, and when you look at those who stand to win, probably also the reason.
Jens Finkhäuser
in reply to Jens Finkhäuser • • •@dhd6 @simonzerafa So, yes, whether something is or isn't open plays into that, and I get the complaint.
But at the same time, it's a distraction.
The general position @pluralistic holds in the blog post is very much in line with distinguishing between the tech and the bubble.
Personally, I feel like responding to that with "yeah, but it's not good enough" is a very good example of the kind of Leftist purity culture that is so, so effective at hindering collaboration.
Ray McCarthy
in reply to Cory Doctorow • • •But Google Docs anything is rubbish.
Cory Doctorow
in reply to Ray McCarthy • • •I see. And do you have moral opinions about whether people should use Google Docs? Do you seek out strangers to tell them that it's dangerous to use Google Docs?
Kid Mania
in reply to Cory Doctorow • • •@pluralistic @simonzerafa
"What is the incremental environmental damage created by running an existing LLM locally on your own laptop?"
I dunno. But how about a couple of million people?
The person who coined the term "enshittification" defends LLMs. Just...wow. We truly are fucked.
Let's all do what Cory does!
☠️
Meanwhile:
technologyreview.com/2025/05/2…
#doomed #ClimateChange
Cory Doctorow
in reply to Kid Mania • • •Which "couple million people" suffer harm when I run a model on my laptop?
Kid Mania
in reply to Cory Doctorow • • •@pluralistic @simonzerafa
Missed the point, sir.
When one person does it...no big deal.
When a couple of million people do it...well, see the MIT article above.
Kid Mania
in reply to Kid Mania • • •Subhead quote from the article:
"The emissions from individual AI text, image, and video queries seem small—until you add up what the industry isn’t tracking and consider where it’s heading next."
Cory Doctorow
in reply to Kid Mania • • •@clintruin @simonzerafa
You are laboring under a misapprehension.
I will reiterate my question, with all caps for emphasis.
Which "couple million people" suffer harm when I run a model ON MY LAPTOP?
Kid Mania
in reply to Cory Doctorow • • •@pluralistic @simonzerafa
I'll reiterate my response.
When you *alone* do it...no big deal.
When a couple of million do it ON THEIR OWN LAPTOPS...problem.
Cory Doctorow
in reply to Kid Mania • • •@clintruin @simonzerafa
OK, sorry, I was under the impression that I was having a discussion with someone who understands this issue.
You are completely, empirically, technically wrong.
Checking the punctuation on a document on your laptop uses less electricity than watching a Youtube video.
Kid Mania
in reply to Cory Doctorow • • •@pluralistic @simonzerafa
Fair enough, Cory. You're gonna do what you want regardless of my accuracy or inaccuracy anyway. And maybe I've misunderstood this. The same way many many will.
But visualize this:
"Hey...I just read Cory Doctorow uses an LLM to check his writing."
"Really?"
"Yeah, it's true."
"Cool, maybe what I've read about ChatGPT is wrong too..."
Cory Doctorow
in reply to Kid Mania • • •@clintruin @simonzerafa
This is an absurd argument.
"I just read about a thing that is fine, but I wasn't paying close attention, so maybe something bad is good?"
Come.
On.
Kid Mania
in reply to Cory Doctorow • • •@pluralistic @simonzerafa
Maybe...
Maybe not.
You have a good day.
Kid Mania
in reply to Kid Mania • • •But hey, you do you, Cory.
I'm nobody... you're Cory Doctorow.
Let's all do what Cory does...
Cory Doctorow
in reply to Kid Mania • • •@clintruin @simonzerafa
Well, you could "do what Cory does" by familiarizing yourself with the conduct that you are criticizing before engaging in ad hominem.
To be fair, that's not unique to me, but people who fail to rise to that standard are doing themselves and others no good.
twifkak
in reply to Cory Doctorow • • •
Cory Doctorow
in reply to twifkak • • •@twifkak @simonzerafa
Parsing a doc uses as much juice as streaming a YouTube video, and less juice than performing a gnarly transform on a hi-rez image in the GIMP.
I measured.
Ray McCarthy
in reply to Simon Zerafa (Status: 😊) • • •At best 40% junk, but unless you are so expert you don't need it, you can't know which is plausible rubbish.
Would you play Russian Roulette every day for hours?
Cory Doctorow
in reply to Ray McCarthy • • •Again, what does checking the punctuation on a single essay per day have to do with "play[ing] Russian Roulette every day for hours?"
FediThing
in reply to tante • • •I really like and admire @pluralistic and have utmost respect for him, and that's why I'm totally baffled about why he is claiming "fruit of the poisoned tree" arguments as cause of LLM scepticism.
The objections to LLMs aren't about origins but about what they they are doing right now: destroying the planet, stealing labour, giving power over knowledge to LLM owners etc.
The objections are nothing to do with LLMs' origins, they're entirely about LLMs' effects in the here and now.
Cory Doctorow
in reply to FediThing • • •Which parts of running a model on your own laptop are implicated in "destroying the planet?" How is checking punctuation "stealing labor?" Or, for that matter "giving power over knowledge to LLM owners?"
FediThing
in reply to Cory Doctorow • • •(Hello Mr Doctorow! Just want to make clear I admire you a great deal and this isn't intended as an attack on you!)
Running a local LLM with no connection to outside providers might be a way of avoiding bad stuff, but I am not clear on how this relates to discussing origins of technologies?
It seems like there's ambiguity in your post about whether it applies just to people with homelabs wondering if they should try offline LLMs, or whether you are discussing LLMs as a general technology?
Almost everyone using LLMs will use the online kind, so objections to LLMs are (reasonably IMHO) based on that scenario.
Cory Doctorow
in reply to FediThing • • •@FediThing
> I am not clear on how this connects to discussing origins of technologies
Because the arguments against running an LLM on your own computer boil down to, "The LLM was made by bad people, or in bad ways."
This is a purity culture standard, a "fruit of the poisoned tree" argument, and while it is often dressed up in objectivity ("I don't use the fruit of the poisoned tree"), it is just special pleading ("the fruits of the poisoned tree that I use don't count, because __").
Cory Doctorow
in reply to Cory Doctorow • • •@FediThing
> Almost everyone using LLMs will use the online kind, so objections to LLMs are (reasonably IMHO) based on that scenario.
Except that in this specific instance, you are weighing in on an article that claims that it is wrong to run a local LLM for the purposes of checking for punctuation errors.
FediThing
in reply to Cory Doctorow • • •Thank you for the responses 🙏
"Because the arguments against running an LLM on your own computer"
...ahhh okay. So was this post aimed more at a very narrow homelab kind of audience?
It's just, as a reader, the article's emphasis on examples of tech origins implies it's trying to defend LLMs in general? This is probably my ignorance as a reader, but it's how it came across to me, and led to bafflement.
Cory Doctorow
in reply to FediThing • • •@FediThing This is the use-case that is under discussion.
pluralistic.net/2026/02/19/now…
Pluralistic: Six Years of Pluralistic (19 Feb 2026) – Pluralistic: Daily links from Cory Doctorow
pluralistic.net
FediThing
in reply to Cory Doctorow • • •Thanks. Can totally see how that makes sense at a technical level for people who run their own offline services.
I think it's the ambiguity that is driving the discourse over this post. People are taking the "refusing to use a technology" section as a defence of LLMs in general?
If the angle was caging LLMs or something like that, it might make it clearer that you aren't endorsing the most common form of LLM?
Anyway, it's your call on this as author, just wanted to feed back on this because your writing matters and I hope feedback is helpful to it.
Cory Doctorow
in reply to FediThing • • •
Shiri Bailem
in reply to FediThing • • •@FediThing I think the problem in discourse is the overwhelming number of people experiencing anti-AI rage.
In the topic of LLMs, the two loudest groups by a wide margin are:
1. People who refuse to see any nuance or detail in the topic, who cannot be appeased by anything other than the complete and total end of all machine learning technologies
2. AI tech bros who think they're only moments away from awakening their own personal machine god
I like to think I'm in the same camp as @Cory Doctorow , that there's plenty of valid use for the technology and the problems aren't intrinsic to the technology but purely in how it's abused.
But when those two groups dominate the discussions, it means that people can't even conceive that we might be talking about something slightly different than what they're thinking.
Cory in the beginning explicitly said they were using a local offline LLM to check their punctuation... and all of this hate you see right here erupted. If you read through the other comment threads, people are barely even reading his responses before lumping more hate on him.
And if someone as great with language as Cory can't put it in a way that won't get this response... I think that says a lot.
@tante
FediThing
in reply to Shiri Bailem • • •@shiri
(Untagged Cory as I'm sure he is getting a lot of replies and I don't want to repeat myself at him.)
I don't think it's the first part that caused problems but the later parts, as they didn't explicitly mention offline LLMs and it was possible to read the later text as referring to all LLMs.
Shiri Bailem
in reply to FediThing • • •@FediThing The link in question where he talked about it, and did explicitly say it, though he didn't use the "offline" label specifically he basically described it as such. (The label itself is not purely self explanatory, so wouldn't have helped much)
Here's the article link: pluralistic.net/2026/02/19/now…
On friendica the thumbnail of the page is what I've attached here, incidentally the key paragraph in question.
@tante
FediThing
in reply to Shiri Bailem • • •Yup, that's the start, but then the text goes onto a discussion of very broad technologies and refusal to use them, which is where the ambiguity sort of creeps in. It isn't clear in the later sections if it's referring to LLMs in general, or just the very specific niche of offline LLMs.
I'm not posting this to attack Cory but to give feedback as a reader. I (incorrectly) took him to be talking about LLMs in general in the later section of the post, and it's possible other people are interpreting the later sections in the same way.
prince lucija
in reply to FediThing • • •i feel in the similar way as big tech has taken the notion of AI and LLMs as a cue/excuse to mount a global campaign of public manipulation and massive investments into a speculative project and pumps gazillions$ into it and convinces everyone it's innevitable tech to be put in bag of potato chips, the backlash is then that anything that bears the name of AI and LLM is poisonous plague and people are unfollowing anyone who's touched it in any way or talks about it in any other way than "it's fascist tech, i'm putting a filter in my feed!" (while it IS fascist tech because it's in hands of fascists).
in my view the problem seems not what LLMs are (what kind of tech), but how they are used and what they extract from planet when they are used by the big tech in this monstrous harmful way. of course there's a big blurred line and tech can't be separated from the political, but... AI is not intelligent (Big Tech wants you to believe that), and LLMs are not capable of intelligence and learning (Big Tech wants you to believe that).
so i feel like a big chunk of anger and hate should really be directed at techno oligarchs and only partially and much more critically at actual algorithms in play. it's not LLMs that are harming the planet, but rather the extraction, these companies who are absolute evil and are doing whatever the hell they want, unchecked, unregulated.
or as varoufakis said to tim nguyen: "we don't want to get rid of your tech or company (google). we want to socialize your company in order to use it more productively" and, if i may add, safely and beneficially for everyone, not just a few.
bazkie 👩🏼💻 bitplanes 🎵
in reply to prince lucija • • •@prinlu @FediThing @pluralistic I agree with most things said in this thread, but on a very practical level, I'm curious what training data was used for the model used by @pluralistic 's typo-checking ollama?
for me, that training data is key here. was it consensually allowed for use in training?
because as I understand, LLMs need vast amounts of training data, and I'm just not sure how you would get access to such data consensually. would love to be enlightened about this :)
Cory Doctorow
in reply to bazkie 👩🏼💻 bitplanes 🎵 • • •@bazkie @prinlu @FediThing
I do not accept the premise that scraping for training data is unethical (leaving aside questions of overloading others' servers).
This is how every search engine works. It's how computational linguistics works. It's how the Internet Archive works.
Making transient copies of other peoples' work to perform mathematical analysis on them isn't just acceptable, it's an unalloyed good and should be encouraged:
pluralistic.net/2023/09/17/how…
How To Think About Scraping – Pluralistic: Daily links from Cory Doctorow
pluralistic.net
FediThing
in reply to Cory Doctorow • • •This would be my take:
Search engines direct people to the work they index. They reward labour by directing people towards it.
Scraping without consent for training data lets people reproduce the work without crediting or rewarding the people who actually did the labour. That seems like labour theft?
If it is labour theft, then it isn't sustainable and that's part of why LLMs are so questionable as a technology.
Cory Doctorow
in reply to FediThing • • •@FediThing @bazkie @prinlu
There are tons of private search engines, indices, and analysis projects that don't direct people to other works.
I could scrape the web for a compilation of "websites no one should visit, ever." That's not "labor theft."
FediThing
in reply to Cory Doctorow • • •@pluralistic @bazkie @prinlu
Indexing works is a totally different thing to creating knock-offs of works, surely?
What Miyazaki said about AI knock-offs surely illustrates the difference?
Cory Doctorow
in reply to FediThing • • •No one is defending "creating knock offs of works." Why would you raise it here? Who has suggested that this is a good way to use LLMs or a good outcome from scraping?
Cory Doctorow
in reply to Cory Doctorow • • •The argument was literally, "It's not OK to check the punctuation in *your own work* if the punctuation checker was created by examining other peoples' work, because performing mathematical analysis on other peoples' work is *per se* unethical."
Cory Doctorow
in reply to Cory Doctorow • • •By this standard the OED is unethical.
bazkie 👩🏼💻 bitplanes 🎵
in reply to Cory Doctorow • • •
Cory Doctorow
in reply to bazkie 👩🏼💻 bitplanes 🎵 • • •@bazkie @FediThing @prinlu
You've literally just made the case against:
* Dictionaries
* Encyclopedias
* Bibliographies
And also the entire field of computational linguistics.
If that's your position, fine, we have nothing more to say to one another because I think that's a very, very bad position.
bazkie 👩🏼💻 bitplanes 🎵
in reply to Cory Doctorow • • •I did not make that case, if you'd properly read my [additions] to the statement.
making dictionaries etc. isn't automated at mass scale the way feeding training data to LLMs is.
it's a very human job that involves a lot of expertise and takes a lot of time.
zenkat
in reply to Cory Doctorow • • •@pluralistic @bazkie @FediThing @prinlu I think part of the issue here is that GenAI is being pushed so hard and fast *everywhere* that's it's hard to be nuanced about what narrow use-cases might be acceptable or not.
We're living under a massive pro-LLM propaganda campaign. They have already set the terms of the debate with a maximalist position. It's no surprise that the backlash is similarly absolute.
Joris Meys
in reply to Cory Doctorow • • •@pluralistic
No, because dictionaries are about language, which is a shared commons; encyclopedias are about knowledge, which is a shared commons; and bibliographies are a list of works, not a derivative.
Knowledge, language and a list of works cannot be copyrighted. You can use language, knowledge, and words from the dictionary. You can quote an encyclopedia when referring to the source. None of that is even relevant to this discussion.
@bazkie @FediThing @prinlu @tante
Zitrone
in reply to Cory Doctorow • • •but all that is kinda offtopic, imo
there are plenty of reasons to not use LLM, besides that commons/copyright stuff, wich are not purist and very much based on real-world-issues. in a perfect world, theft won't be an issue (imo), because we had overcome money and fossil energy. but using genLLM still would be morally wrong, because of its dangers, due to bias and failures. …/
Joris Meys
in reply to Cory Doctorow • • •@pluralistic
The argument was "without the consent of the creators of said works." And you know that.
Don't be just another debate bro. Please.
@FediThing @bazkie @prinlu @tante
FediThing
in reply to Cory Doctorow • • •If LLMs were only used for checking grammar that is one thing.
But by far the most common use of LLMs is labour theft through creating knock-offs, and that's something else.
I think the concern is that training data useful for the first case could be useful for the second case too? Hence the questions about where the training data comes from and where it ends up.
Kind of feels like it needs to be strictly ringfenced if it's to be ethical?
Cory Doctorow
in reply to FediThing • • •Once again, you are replying to a thread that started when someone wrote that using an LLM to check the punctuation in your own work is ethically impermissible because no one should assemble corpora of other people's works for analytical purposes under any circumstances, ever.
bazkie 👩🏼💻 bitplanes 🎵
in reply to Cory Doctorow • • •FediThing
in reply to Cory Doctorow • • •@pluralistic @bazkie @prinlu
I guess the question is if such data is assembled for a legitimate purpose, are there safeguards to stop the same data being used for an illegitimate purpose?
If there aren't any safeguards, then there's a danger the legitimate purpose is used as a shield/figleaf for illegitimate stuff?
bazkie 👩🏼💻 bitplanes 🎵
in reply to Cory Doctorow • • •@pluralistic @prinlu @FediThing I think the difference to search engines is how LLM reproduces the training data..
as a thought experiment; what if I'd scrape all your blogposts, then start a blog that makes Cory Doctorow styled blogposts, which would end up more popular than your OG blog since I throw billions in marketing money at it.
would you find that ethical? would you find it acceptable?
further thought experiment; let's say you lose most of your income as a result and have to stop making blogs and start flipping burgers at McDonald's.
your blog would stop existing, and so, my copycat blog would, too - or at least, it would stop bringing novel blogposts.
this kind of effect is real and will very much hinder cultural development, if not grind it to a halt.
that is a problem - this is culturally unsustainable.
Cory Doctorow
in reply to bazkie 👩🏼💻 bitplanes 🎵 • • •First: checking for punctuation errors and other typos *in my own work* in a model running on *my own laptop* has nothing - not one single, solitary thing - in common with your example.
Nothing.
Literally, nothing.
But second: I literally license my work for commercial republication and it is widely republished in commercial outlets without any payment or notice to me.
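For readers unsure what "a model running on my own laptop" means concretely, here is a minimal, hypothetical sketch of that kind of local proofreading setup, assuming Ollama's default local REST endpoint. The model name, prompt wording, and helper function are illustrative assumptions only, not Doctorow's actual configuration.

```python
import json

# Ollama's default local endpoint; nothing leaves the machine until the
# caller actually POSTs this payload to the locally running server.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_proofread_request(text, model="llama2"):
    """Build the JSON payload for a local punctuation/typo check.

    The model name and prompt are illustrative assumptions: any small
    model already pulled onto the laptop would do.
    """
    prompt = (
        "List any punctuation errors or typos in the following text. "
        "Do not rewrite it, only report problems:\n\n" + text
    )
    return {"model": model, "prompt": prompt, "stream": False}

# Example: prepare (but don't send) a check of a short sentence.
payload = build_proofread_request("Its a fine day , isnt it?")
print(json.dumps(payload)["[:60]" == "[:60]" and slice(0, 60)])
```

The point of the sketch is that the request targets `localhost`: the same frozen weights answer every query, with no scraping, retraining, or remote service involved.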
bazkie 👩🏼💻 bitplanes 🎵
in reply to Cory Doctorow • • •but then you consented to that, right? you are in control of that.
also my example IS similar - after all, it's data scraped without consent, used to create another work. the typo-checker changes your blogpost based on my training data, in the same way my copycat blog changes 'my' works based on your training data.
sure, it's on a way different scale - deliberately, to more clearly show the principle - but it's the same thing.
Cory Doctorow
in reply to bazkie 👩🏼💻 bitplanes 🎵 • • •@bazkie
Should we ban the OED?
There is literally no way to study language itself without acquiring vast corpora of existing language, and no one in the history of scholarship has ever obtained permission to construct such a corpus.
bazkie 👩🏼💻 bitplanes 🎵
in reply to Cory Doctorow • • •@pluralistic I gave it a good thought, and you know what, I'm gonna argue that yes, for me there is a degree of unethical-ness to that lack of permission!
the things that make me not mind that so much are a variety of differences in method and scale;
(*btw just explaining my personal reasons here, not arguing yours)
- every word in the OED was painstakingly researched by human experts to make the most possible sense of it
- coming from a place of passion on the end of the linguists, no doubt
- the ownership of said data isn't "techno-feudal mega-corporations existing under a fascist regime"
- the OED didn't spell the end of human culture (heh) like LLMs very much might.
so yeah. I guess we do agree that, on some level, the OED and an LLM have something in common.
it's the differences in method and scale that make me draw the line somewhere in between them; in a different spot from where you may draw it.
and like @zenkat mentioned elsewhere, it's the whole thing around LLMs that makes me very wary of normalizing anything to do with it, and I concede I wouldn't mind your slightly unethical LLM spellchecker as much, if we didn't live in this horrible context. :)
I guess this has become a bit of a reconciliatory toot. agree to disagree on where we draw the line, to each their own, and all that.
David in Tokyo
in reply to Cory Doctorow • • •@pluralistic @bazkie
Dictionaries reference the sources they use for examples in the entries themselves.
LLMs lose the references at training time.
You've got this dead wrong.
Rafa
in reply to Cory Doctorow • • •@pluralistic After reading so many comments, it is pretty clear who here would be opposing the creation of Napster and torrenting and be defending RIAA... They are also clearly very much against Internet Archive, shadow libraries, etc, simply because they can't take any disagreement.
Who knew running a local LLM, that uses the same energy as watching a youtube video, to spellcheck your own work would bring out such a mob.
zivi
in reply to Cory Doctorow • • •@pluralistic @FediThing you’re attempting to legitimize use of an unethical technology for something you don’t actually need a plausible-sounding-wall-of-text generator for
it goes beyond “it’s made by bad people in bad ways”. it’s a “”tool”” that actively causes cognitive decline and psychosis and sucks the soul out of everything it touches. and mind you, promoting and legitimizing it is an act of support for those bad people and their bad ways. your deflection is typical of someone with no regard for ethics
“I installed Ollama” instantly gives a person away as a techbro
Cory Doctorow
in reply to zivi • • •@zaire @FediThing
I'm not a liberal, I'm a leftist, so perhaps this is why I disagree with you.
The argument that "something is unethical because someone else used it in an unethical way" is so incoherent that it doesn't even rise to the level of debatability.
Dilman Dila
in reply to Cory Doctorow • • •So this is some kind of spell-checker, which is already in LibreOffice? I'm not sure why I would use that instead.
I use offline AI, esp for visual effects, subtitles, fixing dialogue errors, etc. There are "deep fake technologies" useful for mocap, camera tracking, and other such tedious work. They don't use prompts, and don't generate art, and are trained on your own inputs.
Perhaps we need a new name to differentiate it from the online genAI tech.
Mark Saltveit
in reply to Cory Doctorow • • •What's the difference between your argument here and "Slavery is OK because I didn't kidnap the slaves; I just inherited them from my dad." ??
Cory Doctorow
in reply to Mark Saltveit • • •@taoish @FediThing
Because there are no slaves in this instance. Because no one is being harmed or asked to do any work, or being deprived of anything, or adversely affected in *any articulable way*.
But yeah, in every other regard, this is exactly like enslaving people.
Sure.
Mark Saltveit
in reply to Cory Doctorow • • •@pluralistic @FediThing
Unless you consider stolen intellectual property (and ongoing copyright violations) a harm, a deprivation, &c.
But your general analogy against "fruit of the poison tree" morality would seem to also apply in the case of slavery -- in my hypothetical, the person didn't enslave anyone. They just inherited a slave from someone who did. That is indeed "fruit of a poisoned tree", even if they just continued an existing enslavement.
We have a real world recent example -- the cell lines stolen from Henrietta Lacks. Do you dismiss any moral concerns about using her cell line without consent as a neo-liberal moral purity trap?
Cory Doctorow
in reply to Mark Saltveit • • •Scraping and training are not copyright infringements:theguardian.com/us-news/ng-int…
AI companies will fail. We can salvage something from the wreckage
Cory Doctorow (The Guardian)Nelson
in reply to Cory Doctorow • • •I think you can answer these questions yourself.
Suppose you wore a coat made out of mink fur. The minks are already dead, simply wearing the coat won't kill more minks. What does wearing mink fur have to do with cruelty to minks?
Suppose you live in the time of the Luddites. Legislation prohibits trade unions and collective bargaining. Mill owners introduce machines, reducing wages. But you build your own machine. Problem solved? You helping labor or capital?
@FediThing @tante
Cory Doctorow
in reply to Nelson • • •@skyfaller @FediThing
This is a "fruit of the poisoned tree" argument.
Suppose you use a computer to post to Mastodon, despite the fact that silicon transistors were invented by the eugenicist William Shockley, who spent his Nobel money offering bribes to women of color to be sterilized?
Suppose you sent that Mastodon post on a packet-switched network, despite the fact that this technology was invented by the war criminals at the RAND corporation?
Cory Doctorow
in reply to Cory Doctorow • • •@skyfaller @FediThing
Also, you're wrong about the Luddites, just as a factual matter. The guilds the Luddites sprang from weren't prohibited by law, they were *protected* by law, and the Luddites' cause wasn't about gaining new protections under statute, but rather, enforcing existing statutory protections.
(Also: the Luddites didn't oppose steam looms or stocking frames; their demands were for fair deployment of these)
Nelson
in reply to Cory Doctorow • • •@pluralistic Thank you for the fact check. I was paraphrasing that text from the popular Nib comic: thenib.com/im-a-luddite/
If this contains factual inaccuracies I will need to do more research and perhaps stop sharing that comic.
@FediThing @tante
Cory Doctorow
in reply to Nelson • • •Nelson
in reply to Cory Doctorow • • •@pluralistic I don't think mink fur or LLMs are comparable to criticizing the origins of the internet or transistors. It's the process that produced mink fur and LLMs that is destructive, not merely that it's made by bad people.
For example, LLM crawlers regularly take down independent websites like Codeberg, DDoSing, threatening the small web. You may say "but my LLM is frozen in time, it's not part of that scraping now", but it would not remain useful without updates.
@FediThing @tante
Cory Doctorow
in reply to Nelson • • •No. Literally the same LLM that currently finds punctuation errors will continue to do so. I'm not inventing novel forms of punctuation error that I need an updated LLM to discover.
Nelson
in reply to Cory Doctorow • • •@pluralistic Ok, fair enough, if spell checking is literally the only thing you use LLMs for.
I still think you wouldn't rely on a 1950s dictionary for checking modern language, and language moves faster on the internet, but I'm willing to concede that point.
I still think a deterministic spell checker could have done the job and not put you in this weird position of defending a technology with wide-reaching negative effects. But I guess your post was for just that purpose.
@FediThing @tante
Cory Doctorow
in reply to Nelson • • •@skyfaller @FediThing
I'm not using it for spell checking.
Did you read the article that is under discussion?
Nelson
in reply to Cory Doctorow • • •@pluralistic I apologize, I did in fact read the relevant section of your post, and I was using spell-checking as shorthand for all typo checking, because deterministic grammar checkers have also existed for some time, although not as long as spell checkers and perhaps they have not been as reliable. I understand that LLMs can catch some typos that deterministic solutions may not.
I just think we should put more effort into improving deterministic tools instead of giving up.
@FediThing @tante
Cory Doctorow
in reply to Nelson • • •Correl Roush
in reply to Nelson • • •This is precisely it; it's about the process, not their distance from Altman, Amodei, et al. (which the Ollama project and those like it achieve).
The LLM models themselves are, per this analogy, still almost entirely of the mink-corpse variety, and I think it's a stretch to scream "purity!" at everyone giving you the stink eye for the coat you're wearing.
It's not impossible to have and use a model, locally hosted and energy-efficient, that wasn't directly birthed by mass theft and human abuse (or training directly off of models that were). And having models that aren't, that are genuinely open, is great! That's how the wickedness gets purged and the underlying tech gets liberated.
Maybe your coat is indeed synthetic, that much is still unclear, because so far all the arguing seems to be focused on the store you got it from and the monsters that operate the worst outlets.
Cory Doctorow
in reply to Correl Roush • • •@correl @skyfaller @FediThing
More fruit of the poisoned tree.
"This isn't bad, but it has bad things in its origin. The things I use *also* have bad things in their origin, but that's OK, because those bad things are different because [reasons]."
This is the inevitable, pointless dead-end of purity culture.
Nelson
in reply to Cory Doctorow • • •@pluralistic This seems like whataboutism. Valid criticisms can come from people who don't behave perfectly, because otherwise no one would be able to criticize anything. Similarly, we can criticize society while participating in it.
The point I'd like to make (that doesn't seem to be landing) is that LLMs aren't just made by bad people, but are also made through harmful processes. Harm dealt mostly during creation can be better than continuing harm, but still harmful.
@correl @FediThing @tante
Nelson
in reply to Nelson • • •@pluralistic @correl @FediThing In the climate crisis we are often concerned about "embodied emissions", things made with fossil fuels that may not use fossil fuels once they're created. If we don't change our fossil fuel using production systems, those embodied emissions could be enough to kill us.
I'd say that the literal and figurative embodied emissions of even local LLMs are sufficient to make them problematic to use. Individuals avoiding them is insufficient but necessary.
Cory Doctorow
in reply to Nelson • • •@skyfaller @correl @FediThing
That is completely backwards.
The entire point of measuring embodied emissions is to *make use of things that embody emissions*.
We improve old, energy inefficient buildings *because they represent embodied emissions* rather than building new, more efficient buildings because the *net* emissions of building a new, better building exceed the emissions associated with a remediated, older building.
Nelson
in reply to Cory Doctorow • • •@pluralistic You're missing my point. Old houses should be used, but if new houses are built using fossil fuels, then we can cook ourselves by building them even if new buildings are fully electrified.
It feels like you're ignoring the context where LLMs are still being created. It's ethically different to use something made by slaves if slavery is not in the past. If you golfed yesterday on a course maintained by prison labor, it matters that prisoners will clean it again tomorrow.
@correl
Cory Doctorow
in reply to Nelson • • •@skyfaller @correl
I'm not ignoring that context, it is *entirely irrelevant*, because I am *not* using some prospective, as-yet-to-be-trained LLM to check punctuation on my laptop. I am using an *actual, existing* LLM.
So if your argument is, "If you did something that's not the thing you've done, that would be bad," my response is, "Perhaps that's true, but I have no idea why you would seek out a stranger to discuss that subject."
Cory Doctorow
in reply to Nelson • • •@skyfaller @correl @FediThing
Yes, that is just more fruit of the poisoned tree.
This thing harmed people in its creation, therefore the thing is bad, as are all things derived from it.
However, the things *I* use don't count, because the bad things in their history are different because [insert incoherent rationalization].
Correl Roush
in reply to Cory Doctorow • • •While I can understand your argument and almost certain exhaustion at hollow criticism, that response feels very dismissive of the points being made against your application of that argument.
I'm not sure how fruitful of an argument can be had with regard to what you may or may not be using, as you really haven't clarified that anyhow besides locally hosted software that could be used to run terrible models, so this whole mess is just an endless back and forth of "You seem to be dodging the nature of the evil you may be accepting" vs "You're over-concerned with purity", and I think that's justifiably leaving a bad taste in everyone's mouth.
Cory Doctorow
in reply to Correl Roush • • •@correl @skyfaller @FediThing
> as you really haven't clarified that anyhow
I'm sorry, this is entirely wrong.
The fact that you didn't bother to read the source materials associated with this debate in no way obviates their existence.
I set out the specific use-case under discussion in a single paragraph in an open access document. There is no clearer way it could have been stated.
Radio Free Trumpistan
in reply to Cory Doctorow • • •Cory Doctorow
in reply to Radio Free Trumpistan • • •@claralistensprechen3rd @skyfaller @FediThing @correl
I don't know what this has to do with someone stating "you haven't clarified" something, when you have.
Also, I have reposted the paragraph in question TWICE this morning.
Correl Roush
in reply to Cory Doctorow • • •Again, this feels dismissive, and dodges the argument. The clarity I was referring to wasn't the use case you laid out (automated proofreading) or the platform (Ollama), but (as has been discussed at length through this thread of conversation) which models are being employed.
This entire conversation has been centered around how currently available models are harmful not due to vague notions of who incepted the technology they're based upon, but due to the active harm employed in their creation.
To return to the discussion I'm attempting to have here, I find your fruits of the poisoned tree argument weak, particularly when you're invoking William Shockley (who most assuredly had no direct hand in the transistors installed in the hardware on my desk nor their component materials) as a counterpoint to the stolen work and egregious cost that are intrinsic to even the toy models out there. It reads to me as employing hyperbole and false equivalence defensively rather than focusing on why what you're comfortable using is, well, comfortable.
Cory Doctorow
in reply to Correl Roush • • •Scraping work is categorically not "stealing."
Lupino
in reply to Cory Doctorow • • •Cory Doctorow
in reply to Lupino • • •This is a purity culture argument about the "fruit of the poisoned tree." The silicon in your laptop was invented by a eugenicist. The network your packets transit was invented by war criminals. The satellite the signal travels on was launched on a rocket descended from Nazi designs that were built by death-camp slaves.
Cory Doctorow
in reply to Cory Doctorow • • •To be clear, I completely reject this argument as a form of special pleading. Everyone has a reason why *their* fruit of the poisoned tree is OK, but other peoples' fruit of the poisoned tree is immoral.
Lupino
in reply to Cory Doctorow • • •@pluralistic i guess this misses the point: the particular chip in my laptop wasn't made by war criminals (i hope...), but the model you do use was trained under vast amounts of energy and water consumption. I'm not sure this is completely comparable, tbh.
@FediThing @tante
Lupino
in reply to Lupino • • •Cory Doctorow
in reply to Lupino • • •Llama 2 was not built to check spelling and grammar. That's "not even wrong."
Cory Doctorow
in reply to Lupino • • •No, this is just more "fruit of the poisoned tree" and your argument that your fruit of the poisoned tree doesn't count is the normal special pleading that this argument always decays into.
Lupino
in reply to Cory Doctorow • • •Cory Doctorow
in reply to Lupino • • •I never denied the existence of "use-cases that...one can reject it its entirety."
Matija Nalis
in reply to Cory Doctorow • • •The problem with AI is not primarily with tech itself, but human traits of laziness, greed and ignoring unpriced externalities for short-sighted personal gains. Those of course exist even without AI, but AI allows for their damage to be multiplied many thousandfold which upgrades it from minor annoyance to existential crisis.
And the problem with Cory saying he's preferring using his AI is not exact cost of him using AI model (which is tiny), it is NORMALIZING 1/3
Colman Reilly
in reply to FediThing • • •Cory Doctorow
in reply to Colman Reilly • • •Ursa
in reply to Cory Doctorow • • •@pluralistic @Colman @FediThing
This is...disappointing. To be fair, I'm disappointed in almost everyone in this thread for engaging in schoolyard shit throwing, but you're much higher in status and your shit sticks. Have a conversation. Figure out where these views can commingle. Find common understanding or you risk using your high status to fracture an already unstable alliance of people who want technology to operate safely and for the benefit of our shared humanity.
Do better.
komali_2
in reply to Cory Doctorow • • •tante
in reply to Colman Reilly • • •FediThing
in reply to tante • • •Yup. Cory Doctorow has done so much good by turning complex important topics into easy-to-grasp concepts like enshittification, it's made the debate over tech much richer and more widely held.
I'm not attacking him personally or his work.
Deborah Preuss, pcc 🇨🇦
in reply to FediThing • • •@FediThing … Certainly this is true of my reasoning for a #noAI stance. For me it's about climate, economic and social impacts of the ever growing mega-LLMs, and the craze to use them for all kinds of purposes for which they are unfit.
I am much less concerned with a local instance checking a writer's grammar. Lumping those two together makes little sense, to me.
On some other topics, I find @pluralistic's leadership constructive and helpful.
@tante
Ian Betteridge
in reply to FediThing • • •Cory Doctorow
in reply to Ian Betteridge • • •Performing mathematical analysis on large corpora of published work is not "stealing."
Hanno Rein
in reply to Cory Doctorow • • •Ian Betteridge
in reply to Cory Doctorow • • •Cory Doctorow
in reply to Ian Betteridge • • •David
in reply to Cory Doctorow • • •@pluralistic @ianbetteridge @FediThing
It's still profit loss damage curable by income transfer if the illegally acquired data was used to create that profit. Dataset prominence should provide the percentage of profits and prominence is data size but also inference casualty. The primary literature should not be able to be diluted with free intellectual property.
I don't know if any of this is actual case law and I'm not a lawyer.
Cory Doctorow
in reply to David • • •You're talking about ways of using models, not the creation of models. It's possible to make a model that does illegal things. But training a model is not illegal.
James Gleick
in reply to Cory Doctorow • • •@pluralistic @ianbetteridge @FediThing “Mathematical analysis” is doing a lot of work here. It could mean gathering meaningless statistics. Or it could mean capturing the qualities (deviations from the average) that make a particular work of art (or author) special, creative, surprising—for use in simulacra.
I think that's harmful, to the culture as a whole, if not to the artworks and artists getting regurgitated.
Cory Doctorow
in reply to James Gleick • • •@gleick @ianbetteridge @FediThing
Let's stipulate to that (I don't agree, as it happens, but that's OK). It's still not a copyright infringement to enumerate and analyze the elements of a copyrighted work.
For the record, I think AI art is bad and neither consume nor make it.
James Gleick
in reply to Cory Doctorow • • •@pluralistic @ianbetteridge @FediThing I'm not claiming that's copyright infringement. Even if one respects the general framework of copyright, which I know you don’t, it seems hopeless to apply it to this AI mess.
But there is a kind of theft here. Not that it's actionable or measurable. But it’s nontrivial. It's related to questions of impersonation. It's an assault on individuality. Whatever your reasons for thinking AI art is bad (I have some sense), it's related to that, too.
Alaric Snell-Pym
in reply to Cory Doctorow • • •Bruno Nicoletti
in reply to Cory Doctorow • • •Cory Doctorow
in reply to Bruno Nicoletti • • •@bjn @ianbetteridge @FediThing
Once again, you're talking about *using* a model, not training a model.
Also "IP theft" isn't a thing. Perhaps you mean copyright infringement?
Bruno Nicoletti
in reply to Cory Doctorow • • •Cory Doctorow
in reply to Bruno Nicoletti • • •@bjn @ianbetteridge @FediThing it is a bedrock of copyright law that devices 'capable of sustaining a substantial non-infringing use' are lawful. Decided in 1984 (SCOTUS/Betamax) and repeatedly upheld.
It is categorically untrue that merely because a model's output can infringe copyright that the model is therefore illegal.
There's not much that's truly settled in American limitations and exceptions, but this is.
Cory Doctorow
in reply to Cory Doctorow • • •Cory Doctorow
in reply to Cory Doctorow • • •Bruno Nicoletti
in reply to Cory Doctorow • • •Cory Doctorow
in reply to Bruno Nicoletti • • •Bruno Nicoletti
in reply to Cory Doctorow • • •Else, Someone
in reply to Cory Doctorow • • •@pluralistic
> IPREG affirming that training a model doesn't infringe.
What, we now take the party line seriously?
@bjn @ianbetteridge @FediThing @tante
Else, Someone
in reply to Cory Doctorow • • •@pluralistic
> untrue that merely because a model's output can infringe copyright that the model is therefore illegal.
Mhmmm naaah overfitting and memorization are very much a thing, especially in the case of LLM where they've completely given up on controlling data leaks, and where memorization has been demonstrated rather unambiguously e.g. with the suitesparse example...
Not to imply that "illegal" is bad ofc, or that copyright is justifiable
@bjn @ianbetteridge @FediThing @tante
The Secretbatcave
in reply to Cory Doctorow • • •@pluralistic @bjn @ianbetteridge @FediThing
I’d argue that it’s a bit more nuanced. Training and inference are two separate stages with their own rules.
For non profit, academic research, excerpts are allowed to be collected, but not the whole work. You still can’t circumvent DRM either.
Llama, it might be argued, is non-profit, but lifting whole works to train on still isn't allowed.
soc
in reply to Cory Doctorow • • •⁂ L. Rhodes
in reply to tante • • •⁂ L. Rhodes
in reply to tante • • •⁂ L. Rhodes
in reply to ⁂ L. Rhodes • • •Esther Payne
in reply to ⁂ L. Rhodes • • •⁂ L. Rhodes
in reply to Esther Payne • • •@onepict Yeah, code is a pretty literal manifestation of that principle, right?
And one of the major advantages of AI from an ideological point of view is that it allows the provider to write their values into *other people's code*.