Exclusive: AI Error Likely Led to Iran Girl's School Bombing
Pentagon investigators believe a bombing of a girls' school in Iran on Saturday likely resulted from inaccurate information provided by AI. (John Keough, This Week in Worcester)

calliope
in reply to Tomtits • • •
Oh for sure, they’re taking big tech’s lead.
Tech has been blaming AI for layoffs for a couple years now, when they hired an insane number of people during and after COVID. They literally hired to lay off. I found this graph illustrative of the boom and bust.
The people in this administration love when tech can get away with something (see the Cambridge Analytica scandal around 2016) because they will too.
kibiz0r
in reply to Tomtits • • •
Yes. AI allows the user to separate output from understanding, accountability, and obligation. It can launder intention just as well as inattention. AI is the ultimate tool of fascism.
Edit: But I should mention, this is not new. Institutions have been pursuing techniques for this long before AI. See "Everything Was Already AI" on YouTube.
cøre
in reply to Chris Remington • • •
AI is not the problem; it's the scapegoat. They want to be able to shrug and point at AI, saying there was a misclassification and that it wasn't their fault. Meanwhile they ignore the fact that a human at any point could have stopped the attack or double-checked the target. They chose not to because they don't care. Collateral damage, wanton destruction, and civilian casualties are the goal.
Kichae
in reply to cøre • • •
This is the WHOLE point of why these generative models have been pushed so hard the past couple of years. They tested the waters to see if people would accept "it's the computer's fault" as an acceptable excuse, and then slammed on the gas.
Accountability sinks, as Dan Davies has named them, are the whole point. It's everything a slimy corporate CEO or government official has ever wanted.
quick_snail
in reply to captchacrunch • • •
WSJ was first to report that Anthropic's Claude AI was used in determining targets in Iran, but the article is paywalled.
Futurism published an article asking the Pentagon about it, and the Pentagon refused to answer questions.
quick_snail
in reply to orca • • •
Reuters said it was the US, not Israel.
Both are capable of committing genocide, with or without AI. See history.