
AI's ability to make, or assist with, important decisions is fraught: on the one hand, AI can *often* classify things very well, at a speed and scale that outstrip any reasonably resourced group of humans. On the other, AI is sometimes *very* wrong, in ways that can be terribly harmful.

-

If you'd like an essay-formatted version of this thread to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:

pluralistic.net/2024/10/30/a-n…

1/

Lisa Melton reshared this.

in reply to Cory Doctorow

I'm reminded of an old business communication canard: "Since a computer can never be held accountable for a mistake, a computer can never make a decision."
in reply to Dave Neary

@dneary
Someone will suggest setting up a court of computers, with a jury composed of computers, to judge computers.
in reply to Locksmith

@locksmithprime @dneary that's too close to what a GAN (generative adversarial network) already is, for my tastes...
in reply to Locksmith

@locksmithprime @dneary Once, out of the kindness of my stupid heart, I went against my policy of never contributing to bullshit tech companies and wrote a nice review for a hotel I really liked in Dublin. Then Google rejected it (which I don't think I ever considered even a possibility) because it (i.e., part of its "AI") deemed the review to be "fake engagement," so... yeah, this is the end.
in reply to Red

@resl @locksmithprime @dneary I did this on a similar site and got a similar outcome (my first review was "not recommended" and wouldn't show up on the default view). I looked at the other "not recommended" reviews and it looked to me like the site viewed all new users with a lot of suspicion, which honestly is pretty sensible in this context.
in reply to Dave Neary

@dneary
"Machines can do the work so that people have time to think"

"Machines should do the work; that's what they're best at. People should do the thinking; that's what they're best at."

I first heard this as a sample on a mashup album, but I tracked it down to this short film Henson made for IBM in 1968:
youtube.com/watch?v=_IZw2CoYzt…

in reply to Cory Doctorow

mstdn.social/@RickGaehl/113395…
Unknown parent

n8chz
@CptSuperlative It's an example of what happens when the enshittification hits the fan.
in reply to Cory Doctorow

I get that most people are not used to statistical thinking and don't like the fact that no algorithm (no process, actually) is going to be 100% correct. But the very enshittification that you have documented means that there is an amount of toxicity, misinformation, and disinformation online that can *only* be tackled using machine learning tools. As a moderator, I'm happy to have a model screen content for me and suggest things for review, and I'm also OK with overriding it.
in reply to Matthew Maybe

@matthewmaybe Did you follow any of the cited, replicated, peer-reviewed research that documents the problems with this approach, and the fact that experts consistently overrate their own ability to override an algorithmic judgment for both accuracy and fairness?

It's possible that you're the expert who is immune to this well-documented effect, but it seems likely that every experimental subject in the data believed this about themselves, and each was demonstrably incorrect in that belief.

in reply to Cory Doctorow

I don't doubt the research, nor do I believe that I'm exempt from it, though I do ignore machine-generated reports more often than not. The issue is the sheer amount of online content that moderators and the public are expected to examine critically, especially when synthetic media is rampant. Some of it can be spotted by humans with a glance at the fingers or backgrounds in a photo, or a keyword search for GPT boilerplate. But it's unrealistic to think that will work forever.
Unknown parent

Cory Doctorow
@CptSuperlative @n8chz Just remember that a guy with a noose around his neck is technically a "human in a loop."
in reply to n8chz

@n8chz

Until Doctorow comes up with something better, maybe I'll just rearrange the words from "human-in-the-loop" to "inhuman-loop".

@pluralistic