in reply to Chris Remington

Woz has a better take than the vast majority of people the MSM tends to interview. I'm not surprised (he seems pretty technically competent in general), but it's definitely a breath of fresh air.
in reply to XLE

He says

"too dry and too perfect, and I want something from a human being, and I'm disappointed a lot."


I think this is the problem. It’s not human. Stop having this expectation.

in reply to org

It’s not human. Stop having this expectation.


Then it needs to stop being injected everywhere and trying to take over the human aspects of tech, art, creativity, etc.

Till then...

in reply to PenguinCoder

It’s not doing that.

People are using it for that.

Big difference.

You’re the one humanizing it. This is like claiming a tree is trying to be a musician because someone made a guitar out of it.

Stop humanizing AI.

in reply to XLE

(he seems pretty technically competent in general)


My god this is funny.

I know you’re being completely sincere, but taken out of context, that bit in parentheses is hilarious.

in reply to magnetosphere

Yep, the guy who figured out in four days of coding how to use NTSC to show colors with a $1 chip, because he wanted to play Breakout in color.

I'm pretty sure his plane crash robbed us of great things.

in reply to a_gee_dizzle

It was in 1981, and he had to take a break from working at Apple due to his injuries.
in reply to EatMyPixelDust

Oh, that's a shame. If this hadn't occurred, would Apple be more friendly to open source these days?
in reply to a_gee_dizzle

Not a chance. By the time he left Apple in the 80s, they had already gone full corpo.
in reply to a_gee_dizzle

So he had an accident while flying a Cessna he shouldn't have been flying, and suffered a head injury.

Infinite Loop, the book about Apple stuff, said that "Coming out of the semi-coma had been like flipping a reset switch in Woz's brain. It was as if in his thirty-year old body he had regained the mind he'd had at eighteen before all the computer madness had begun. And when that happened, Woz found he had little interest in engineering or design. Rather, in an odd sort of way, he wanted to start over fresh."

in reply to massive_bereavement

That's very interesting. Did he just retire after that, or start doing something else?
in reply to magnetosphere

Woz deserves all the accolades he gets; I was mostly thinking of everybody currently in tech: CEOs who were good at one thing (maybe less), but have decided they're geniuses across all fields because they succeeded in one. That's especially prevalent in the AI sphere, with many of the employees "speaking out" against AI just repeating the same baseless claims as the CEOs.

Far be it from me to criticize mainstream media, but if I were a betting woman, I'd guess CNN was disappointed they didn't get the typical apocalyptic prophecies.

in reply to CerebralHawks

Behind the Bastards did a great episode on Steve Jobs in case anyone is interested

podcastindex.org/podcast/66646…

in reply to CerebralHawks

Y’all I think this guy might actually be Steve Wozniak 👀
in reply to CerebralHawks

Never heard of this guy before; he sounds cool. If he was just the co-founder of Apple but parted ways, I wonder why he's still relevant? I guess he must have been doing something else all this time, and now I have to dig up his Wikipedia page to learn. Bye!
in reply to Mothra

He buys (or used to, anyway) sheets of $2 bills from the Treasury, cuts them himself, and puts them together into a little packet. Then in tipping situations he'll peel off a wad, just to see people's reactions for the lolz.
in reply to prole

Woz is not a billionaire. He's also the closest thing that exists to a single human who invented the personal computer and universal remote control. He gave away almost all of the money he earned.
in reply to Chris Remington

In context, it sounds like he’s “disappointed a lot” by people choosing to use AI, which is a crucial distinction. His objection is about the kind of society we’re sleepwalking into, not the technical maturity of the current crop of software.

AI's generated text is "too dry and too perfect, and I want something from a human being, and I'm disappointed a lot."
in reply to kibiz0r

It's almost like they know people aren't going to read past the headline.
in reply to Chris Remington

There are also those who slam people for having negative opinions of AI. Mustafa Suleyman, CEO of Microsoft's AI group, called public criticism of the tech "mind-blowing," Nvidia's Jensen Huang says the negativity is hurting society, and Nadella has pleaded to move the conversation beyond "AI slop."


Then stop serving us AI slop. Y'all get paid way too much to claim that your products aren't what they are.

in reply to BarneyPiccolo

I mean, this is like a casual conversation about his experiences with it. It's not him announcing a stand on the technology.
in reply to Chris Remington

As a software engineer, I'm convinced "vibe coding" is just a meme. It's like watching a chaotic system. You need to constantly be wrangling it back on topic, and keep it from bloating the codebase, in order to get anything done. You may be able to vibe a small mockup, but it will inevitably go off and produce garbage that doesn't make sense.

It is useful as a glorified grep, and a sort of natural language to programming language compiler for simple descriptions. But if you don't already understand what you expect the LLM to output, you're gonna have a bad time.
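That "if you don't already understand what you expect the LLM to output" point can be sketched concretely. This is a minimal illustration, not any real model's API: the `generated_code` string is a stand-in for whatever the model actually returns, and the idea is that you only accept it once it passes tests you wrote yourself.

```python
# Treat LLM output as untrusted source text: load it into an isolated
# namespace, then gate acceptance on your own tests, which encode what
# you expected the model to produce in the first place.

generated_code = """
def slugify(title):
    return "-".join(title.lower().split())
"""

namespace = {}
exec(generated_code, namespace)  # load the untrusted code
slugify = namespace["slugify"]

# Your own tests are the review step; if these fail, reject the output.
assert slugify("Hello World") == "hello-world"
assert slugify("  Vibe   Coding ") == "vibe-coding"
```

The point isn't the `slugify` function itself; it's that the human-written assertions, not the model, define correctness.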

in reply to teawrecks

A group at the web dev team for my company started a Dev Club to meet on Zoom to discuss code info and showcase dev work from them and other dev teams. Being a sysadmin scripter, I wanted to join to get experience in professional dev workflow as a creeper in the shadows. Then genAI code helpers happened and the guy leading it, the senior web dev at my company, started to use Claude and other genAI tools. Now, that's like all they talk about and showcase. Like 2 or 3 devs decided to outsource their thinking and now that's all the convo is about. Not even about their stuff, but leading genAI developments.

I stopped attending the monthly Zoom meetings. Not sure how many in there are into genAI, but since it wasn't my bag, I didn't want to say anything and just declined the recurring calendar event. Maybe I'll start my own dev club with blackjack and hookers!

in reply to Ænima

I think this sort of thing happens in software engineering a lot... it doesn't matter whether genAI tools are the right solution to a problem. Billionaires are throwing an alarming amount of money at this, so they are basically trying to get a slice of that pie by virtue signaling that they love genAI, even if they don't think it's that valuable.

My employer wants features delivered, I don't have time to circlejerk about genAI.

in reply to teawrecks

LLMs for coding have improved dramatically over the past year or so. But I find that their quality varies greatly depending on the model. I find models like Gemini and GPT to be too overconfident, and they don't communicate well enough. Claude knows when to stop and evaluate the situation for options. I've had mixed results with local models, but I'm still adjusting quantization settings to make them work best with my VRAM.

You still need the skills to understand programming and design engineering, and you frankly need the personality to be meticulous with your reviews, but it's really nice having something that can code 3-8x faster than what I was doing before.

in reply to Chris Remington

I run a microscopy facility at a university.

In the last 2 years, software companies have tried selling AI-based image correction and quantification software.

Fuck no, you cannot do biomedical research by generating slop and calling it data. No one is buying this software.

in reply to Chris Remington

It sucks so much and they are forcing it everywhere trying to find ways to not pay people for labor.

It's all just generic shit

in reply to Chris Remington

I get his and Stallman's objections, or disappointment in this case, but really it is just another abstraction of search, which, by the way, was not expected to give perfect answers to questions. One of the worst things about LLMs is the expectation by some that what they send back can be used without review.
in reply to HubertManne

It's not an abstraction of search, though. It's a conditional regurgitation of the entire Internet with randomization. That is significantly and meaningfully different.

It's not finding text or context matches and reproducing them, it's guessing the next word based off of the steaming pile of horse shit people have dumped over the Internet in attempts to garner attention or scam others.
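For what it's worth, the "guessing the next word based off of" its training text description can be illustrated with a toy bigram sampler. This is a deliberate oversimplification (real LLMs run transformers over tokens, not word-count tables), but it shows the conditional-plus-randomized shape of next-word prediction:

```python
import random
from collections import Counter

# Toy bigram model: count which word followed each word in the
# "training" text, then sample the next word from those counts.
corpus = "the cat sat on the mat and the cat ran".split()
following = {}
for prev, nxt in zip(corpus, corpus[1:]):
    following.setdefault(prev, Counter())[nxt] += 1

def next_word(word, rng=random.Random(0)):
    """Sample a continuation weighted by how often it followed `word`."""
    counts = following[word]
    words = list(counts)
    weights = [counts[w] for w in words]
    return rng.choices(words, weights=weights)[0]

print(next_word("the"))  # "cat" (seen twice) or "mat" (seen once), at random
```

Nothing here "finds" or "matches" a document; it only regurgitates statistics of what it ingested, which is the distinction being made above.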

in reply to Kichae

From my experience, despite the difference in process, it does about as well. This is one reason its providing sources for its answers is so important. It's funny how on social media it's so common to get the response "source?", but many folks don't care whether the LLM gives them one.
in reply to HubertManne

which by the way was not expected to give perfect answers to questions


Except that's how a lot of people treat it. And there's no way to guard against that.

in reply to uuj8za

Yeah, that is the problem, although you did have issues with people self-diagnosing through Google before ChatGPT. The problem is, the more it seems like an answer, the larger the group of people who will take it as one. Except for the small opposite group whose hackles get raised when they get a response that way, which includes me. Still, the models giving sources, and people actually using them, is the best we will get, I think.