

A rogue AI led to a serious security incident at Meta


in reply to along_the_road

"Rogue AI" as if it's some sentient evil thing when it's just an LLM with too many permissions... This timeline is so dystopian, but simultaneously incredibly lame. I hate it.
in reply to GregorGizeh

It shows LLMs can do significant harm without the capabilities of an AGI.

Overhyping LLMs and overinflating their capabilities makes things worse, as people are less skeptical of LLM output.

in reply to along_the_road

"Flagrant security lapse caused an incident when software engineer uses inappropriate tool for the job."
in reply to Butterbee (She/Her)

"Inappropriate tool also weirdly good at gaslighting engineers and managers"
in reply to sem

Let's be real, managers gaslight themselves daily.
in reply to along_the_road

According to Clayton, the AI agent involved didn’t take any technical action itself, beyond posting inaccurate technical advice, something a human could have also done.


Producing inaccurate technical advice, with a confident tone, at scale.

If that LLM were an employee, it would get a formal reprimand, then be demoted or fired as the behavior continued.

in reply to Hirom

That sounds sweetly naive. "Producing inaccurate technical advice, with a confident tone, at scale" sounds like the perfect credentials for a career in consultancy.
in reply to Tim

That's a good way to describe LLMs: very bad but very prolific consultants.
in reply to Hirom

Yeah, they'd get promoted for sure where I work.
in reply to Hirom

Wait till this starts happening in the construction industry.
in reply to along_the_road

An AI apocalypse won't come from an AI becoming sentient, but from some idiot putting AI where it shouldn't be.