I think this recent post by AI industry CEO Matt Shumer is worth a read. In it, he basically explains how quickly LLMs (large language models) are evolving to supplant many developers and programmers, and how that disruption is coming to other industries quickly. He also warns critics of AI to adjust their priors and realize that the AI tools they mocked just six months ago aren't the ones in use today:
“I am no longer needed for the actual technical work of my job. I describe what I want built, in plain English, and it just… appears. Not a rough draft I need to fix. The finished thing. I tell the AI what I want, walk away from my computer for four hours, and come back to find the work done. Done well, done better than I would have done it myself, with no corrections needed. A couple of months ago, I was going back and forth with the AI, guiding it, making edits. Now I just describe the outcome and leave.”
While the post is interesting (with the understanding that this is somebody who makes and sells automation software), you might notice something: absolutely nowhere in the blog post does he meaningfully acknowledge the widespread problems with existing AI use. Either because his financial self-interest doesn't allow for honest acknowledgment of them, or because he simply doesn't find those aspects all that interesting.
Maybe both.
XLE
in reply to Powderhorn • • •
There are so many surveillance-obsessed AI CEOs that this one wasn't in my top two guesses. Excellent reminder, Techdirt.
The amount of attention the pro-business doom narrative gets is fundamentally at odds with this realism, because the reality is that LLMs aren't all that great.
UnspecificGravity
in reply to XLE • • •
Yep. These guys WISH their dumb little toy was capable of world domination.
What it is actually capable of is inaccurate spreadsheets and B+ middle school history papers.
chicken
in reply to Powderhorn • • •
I just wish this anger were focused on the corporations and the way regular people are being treated as disposable resources, shut off from the wealth and output of our society so the rich can have it all, rather than as people with inherent worth who should be supported because we are people.
wagesj45
in reply to Powderhorn • • •
I'm fully in favor of using the power of the state to regulate AI data centers so that they don't exploit the limits of local resources. I'm fully in favor of rewriting copyright law from the ground up for the modern era, to the extent that I think copyright should exist at all. I'm in favor of researching model-human interactions, assessing the well-being of users, and coming up with some forms of agency-respecting safeguards. I'm more than fully in favor of reining in billionaire corporations whose only motive is profit and taxing the ever-loving shit out of them.
What I'm not in favor of is restricting the technology as a whole or acting like its use, or even its very existence, is a moral failure. I'm not in favor of stripping agency from adults because you think you know what's best for them.
Powderhorn
in reply to wagesj45 • • •
ɔiƚoxɘup
in reply to Powderhorn • • •
For my part, I have tried to be very clear-eyed about this and have been driven to learn and understand how it works. I want to know what its strengths are and what its weaknesses are. And mostly, I want to know how it's going to be used to further subjugate us. I have already seen how it has been used to deny people insurance coverage and for various and sundry other nefarious uses. Most recently, it has been suspected as being one of the prime causes of the deaths of many little Iranian girls. The future is now.
I feel that if I understand it well enough, then I will be better equipped in the coming struggle against it. I see how already people are losing their jobs at the mere possibility that AI might actually replace them and be able to do their jobs for them. And I relish the coming articles that go into detail about how badly it's been overestimated and how completely it has caused ruin in the companies that have chosen it over humans.
I use AI. I don't trust it, but I know what it's good at. I also know what it's terribly bad at. Most importantly, I have seen it improving, and it is becoming more concerning. Articles like this seem to indicate the possibility that AI will become much more affordable and much more powerful very soon, and yet it seems clear that it is being intentionally kept out of our reach.
I am not so concerned about AI causing an extinction-level event as I am about its accelerating our own self-destruction through many other means, not the least of which is climate change.
I am reminded of an old adage.
The thinking behind this being, of course, that if you make one mistake and then automate it, you have the opportunity to fail exponentially, where otherwise it would just be a simple typo you could fix with correction fluid.
To the point mentioned earlier, though, I don't think it matters as much whether the technology has advanced to the point where it can replace workers (it hasn't, but it's closer than I'd like) as that they believe it already has and are replacing workers with it anyway.
/rant
Sorry for the bad writing PH, I'm tired.
Human brain cells on a chip learned to play Doom in a week
Alex Wilkins (New Scientist)
Powderhorn
in reply to ɔiƚoxɘup • • •