AI's ability to make - or assist with - important decisions is fraught: on one hand, AI can *often* classify things very well, at speed and scale that outstrips the ability of any reasonably resourced group of humans. On the other, AI is sometimes *very* wrong, in ways that can be terribly harmful.

-

If you'd like an essay-formatted version of this thread to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:

https://pluralistic.net/2024/10/30/a-neck-in-a-noose/#is-also-a-human-in-the-loop

1/

in reply to Cory Doctorow

I'm reminded of an old business communication canard: "Since a computer can never be held accountable for a mistake, a computer can never make a decision."
in reply to Dave Neary

@dneary Someone will suggest setting up a court of computers, with a jury composed of computers, to judge computers.
in reply to Locksmith

@locksmithprime @dneary that's too close to what a GAN (generative adversarial network) already is, for my tastes...
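
For anyone who hasn't met the term: a GAN really is one network sitting in judgment of another's output, with each trained against the other. A minimal PyTorch sketch on toy data; every layer size, name, and the stand-in "real" distribution here is an illustrative assumption:

```python
import torch
import torch.nn as nn

# Generator maps noise to 2-D "samples"; discriminator judges real vs. fake.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
D = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

real = torch.randn(64, 2) * 0.5 + 2.0   # stand-in "real" distribution

for step in range(200):
    # Discriminator step: score real samples as 1, generated samples as 0.
    fake = G(torch.randn(64, 8)).detach()
    d_loss = (loss_fn(D(real), torch.ones(64, 1))
              + loss_fn(D(fake), torch.zeros(64, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to get the discriminator to score fakes as 1.
    fake = G(torch.randn(64, 8))
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Which is why the "court of computers judging computers" joke lands: the discriminator's verdicts are grounded in nothing but its ongoing contest with the generator.
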
in reply to Locksmith

@locksmithprime @dneary Once, out of the kindness of my stupid heart, I went against my policy of never contributing to bullshit tech companies and I wrote a nice review for a hotel I really liked in Dublin. Then Google rejected it (which I don't think I ever considered even a possibility) because it (i.e., part of its "AI") deemed the review to be "fake engagement" so... Yeah, this is the end.
in reply to Red

@resl @locksmithprime @dneary I did this on a similar site and got a similar outcome (my first review was "not recommended" and wouldn't show up on the default view). I looked at the other "not recommended" reviews and it looked to me like the site viewed all new users with a lot of suspicion, which honestly is pretty sensible in this context.
in reply to Dave Neary

@dneary
"Machines can do the work so that people have time to think."

"Machines should do the work, that's what they're best at; people should do the thinking, that's what they're best at."

I first heard this as a sample on a mashup album, but I tracked it down to this short film Jim Henson made for IBM in 1968:
https://www.youtube.com/watch?v=_IZw2CoYztk

in reply to Cory Doctorow

I think you're going to need to coin a new term, because enshittification won't cover this clusterfuck.
in reply to n8chz

@n8chz @pluralistic Until Doctorow comes up with something better, maybe I'll just rearrange the words from "human-in-the-loop" to "inhuman-loop".

in reply to Captain Superfluous

@CptSuperlative @n8chz Just remember that a guy with a noose around his neck is technically a "human in a loop."
in reply to Cory Doctorow

@CptSuperlative @n8chz ... and probably feeling the same amount of stress about making a wrong decision.
in reply to Cory Doctorow

https://mstdn.social/@RickGaehl/113395484106914085
[Parts 2 through 24 of the thread are collapsed behind "Long thread" content warnings and their text is not recoverable here; the full essay is at the pluralistic.net link above.]

in reply to Cory Doctorow

As a #UX designer, instead of focusing on the specific LLM model, I'm fascinated by *our* models of the situation. The TechBoi crowd continually forces nuanced human decisions onto a simplistic Procrustean bed.

We've seen this throughout tech's history, from early language-translation software to 'smart' soap dispensers that don't recognize Black hands. Tech's vision is nearly always myopic. We'll get there, but only after hundreds of bad assumptions.

#ux


in reply to Cory Doctorow

I get that most people are not used to statistical thinking and don't like the fact that no algorithm (no process, actually) is going to be 100% correct. But the very enshittification that you have documented means there is an amount of toxicity, misinformation, and disinformation online that can *only* be tackled with machine-learning tools. As a moderator, I'm happy to have a model screen content for me and suggest things for review, and I'm also OK with overriding it.
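
The screen-then-review split Matthew describes is easy to make concrete: auto-act only where the model is confident, and queue the uncertain middle for a person who can always override. A minimal Python sketch, with hypothetical thresholds and names (nothing here is a real platform's API):

```python
# A sketch of the screen-then-review workflow described above. The
# thresholds, names, and score semantics are illustrative assumptions.

def moderate(model_score: float, human_decision: str | None = None) -> str:
    """Return "allow", "remove", or "queue_for_review" for one post.

    model_score is the model's estimated probability that the post
    violates policy; a human decision, when present, always wins.
    """
    if human_decision is not None:   # the moderator's override trumps the model
        return human_decision
    if model_score < 0.2:            # confidently benign: let it through
        return "allow"
    if model_score > 0.9:            # confidently violating: act immediately
        return "remove"
    return "queue_for_review"        # uncertain middle: human in the loop


# The model flags a borderline post; a human reviews it and overrides.
assert moderate(0.55) == "queue_for_review"
assert moderate(0.55, human_decision="allow") == "allow"
```
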
in reply to Matthew Maybe

@matthewmaybe Did you follow any of the cited, replicated, peer-reviewed research that documents the problems with this approach, and the fact that experts consistently overrate their own ability to override an algorithmic judgment for both accuracy and fairness?

It's possible that you're the expert who is immune to this well-documented effect, but it seems likely that every experimental subject in the data believed this about themselves, and each was demonstrably incorrect in that belief.

in reply to Cory Doctorow

I don't doubt the research, nor do I believe I'm exempt from it, though I do ignore machine-generated reports more often than not. The issue is the sheer amount of online content that moderators and the public are expected to examine critically, especially when synthetic media is rampant. Some of it can be spotted by humans with a glance at the fingers or backgrounds in a photo, or a keyword search for GPT boilerplate, but it's unrealistic to think that will work forever.