so 3 courts + the US Copyright Office say you can't copyright or patent anything made primarily with LLMs, because automata aren't human.
#SCOTUS won't review these rulings because copyright is meant to protect human creations, not software or automata.
this may mean #AWSlop #Microslop are “de-copyrighting” & “de-patenting” their own proprietary software as they let automata “code” 🧐
❝ AI-generated art can’t be copyrighted after Supreme Court declines to review the rule
theverge.com/policy/887678/sup…
The US Supreme Court has declined to hear a case over whether AI-generated art can be copyrighted. Emma Roth (The Verge)

Dave Rahardja
in reply to El Duvelle:
@elduvelle When you copyright a book, you’re not copyrighting the output of your typewriter; you’re copyrighting your work.
The AI program can be copyrighted. Its output can’t.
It’s pretty consistent.
El Duvelle
in reply to Dave Rahardja:
Hmm, not sure, but this made me think more about it. Say the typewriter is actually changing the inputted letters a bit, for example changing some of the Ts into Ss. Maybe the author notices it and likes the output, or maybe not, but in any case they want to copyright the resulting book (with the "typos"). That would be valid, right?
Now, isn't the output of an LLM a combination of its inputs (prompt) and its internal machinery (transforming the inputs)? So why can't the output be copyrighted?
Edit: we should probably also consider the training set as part of the inputs, but I still don't see why the output couldn't be copyrighted. However, who would benefit from the copyright is a good question: probably all the authors of the work that went into the training set, plus the person who wrote the code of the LLM, plus the person who wrote the prompt.
Dave Rahardja
in reply to El Duvelle:
EDIT: As @LeslieBurns says below, this is INCORRECT.
I’m not a lawyer. But intuitively, as the SCOTUS implies, copyright protects the work of humans. When writing a prompt to generate art, a machine is performing the vast majority of the transformation from the billions of works it ingested, not the human. Granted, *how much* human work needs to happen for something to be “transformative” (and thus grant the person a copyright) has been a subject of debate for decades, but generative AI is nowhere close to that threshold IMO.
El Duvelle
in reply to Dave Rahardja:
@drahardja
I agree to some extent, and I'm also not a lawyer, but instead of saying that the output of an LLM can't be copyrighted, I think the question becomes who should benefit from the copyright (or patent). Certainly not just the person who entered the prompt. Instead it would be more like a group work by all of those who contributed to any of the LLM's inputs: all the authors of the stolen work, plus the person who programmed the LLM, plus the person who prompted the LLM. The machine itself is not doing any work, just following instructions, like my typewriter, only in a more complex manner.
(Edited my previous post to add this)
It's definitely interesting to think about it!
El Duvelle
in reply to Jay:
@jaystephens
Right. But a typewriter wouldn't do anything on its own, just like an LLM wouldn't do anything on its own without a human telling it what to do. Both need input from the human, and they transform this input into something else. The difference is that the LLM got some preprogrammed input (some of it part of its training set, which is a mash-up of actual people's novels, etc.) as well as the current input provided by the human prompt.
The LLM is not anything like an independent entity creating anything; it's just some code doing what it's programmed to do.
Jay
in reply to El Duvelle:
The output cannot be considered only the result of the prompt, which was the only work done by the user.
El Duvelle
in reply to Jay:
@jaystephens
Definitely, see my other answer here
neuromatch.social/@elduvelle/1…
In the end I'd say the question is "who should benefit from the copyright", not whether the LLM's output is copyrightable or not, because I don't see why it wouldn't be. Obviously it's not going to be easy to figure it out, but in theory all those who contributed to the output (including in the training set) should be considered as contributors. The LLM itself, like a typewriter, is not a contributor.
El Duvelle
2026-03-02 21:48:05
Jay
in reply to El Duvelle:
It rather raises the question of to what extent the intended purpose of commercial LLMs as they actually exist is to obfuscate things precisely so that any outcome like that is unachievable.
Pete Alex Harris🦡🕸️🌲/∞🪐∫
in reply to El Duvelle:
@elduvelle @jaystephens
Your continuing not to see why LLM output can't be copyrightable is neither here nor there. It can't. The part written by the human is the prompt itself. You could copyright that, sure. It just isn't useful.
If you could get a court to agree copyright went to all human contributors of the training data, then *nobody* could benefit from it, as nobody would have a right to make copies of it without *all* the contributors or their estates granting a license.
El Duvelle
in reply to Pete Alex Harris🦡🕸️🌲/∞🪐∫:
@petealexharris yeah, obviously the fact that the LLM's output comes from untraceable and sometimes stolen data is a problem.
My main point is that the SCOTUS treating the output of an LLM as somehow the "creation" of software, instead of the creation of a group of humans, is silly and wrong. It's as if they fell into the trap of treating the LLM as a separate entity, as if it were some kind of actual artificial intelligence, which it really is not.
Software doesn't "create" anything, and the output of software like Photoshop is no different from the output of software like an LLM; it's still created by humans in the first place. The only difference is that we can't easily track the origin of the LLM's output.
@jaystephens
Pete Alex Harris🦡🕸️🌲/∞🪐∫
in reply to El Duvelle:
If you can't track from the creative input of the human to the output, there's no provenance to attach ownership to. If you can identify that it contains unlicensed copyrightable material then it's infringing. Obviously you can't assert copyright on someone else's work, and if it's a mix, nobody can. The courts know it's a mess, and I suspect are refusing to make it worse.