A Man Bought Meta’s AI Glasses, and Ended Up Wandering the Desert in Search of Aliens
At age 50, Daniel was “on top of the world.”

“I turned 50, and it was the best year of my life,” he told Futurism in an interview. “It was like I finally figured out so many things: my career, my marriage, my kids, everything.”
It was early 2023, and Daniel — who asked to be identified by only his first name to protect his family’s privacy — and his wife of over three decades were empty nesters, looking ahead to the next chapter of their lives. They were living in an affluent Midwestern suburb, where they’d raised their four children. Daniel was an experienced software architect who held a leadership role at a large financial services company, where he’d worked for more than 20 years. In 2022, he leveraged his family’s finances to realize a passion project: a rustic resort in rural Utah, his favorite place in the world.
“All the kids were out of the house, and it was like, ‘Oh my gosh, we’re still young. We’ve got this resort. I’ve got a good job. The best years of our lives are in front of us,’” Daniel recounted, sounding melancholy. “It was a wonderful time.”
That all changed after Daniel purchased a pair of Ray-Ban Meta smart glasses — the AI-infused eyewear that Meta CEO Mark Zuckerberg has made central to his vision for the future of AI and computing. The glasses, he says, opened the door to a six-month delusional spiral that played out across Meta platforms through extensive interactions with the company’s AI, culminating in dangerous journeys into the desert to await alien visitors and a belief that he was tasked with ushering in a “new dawn” for humanity.
And though his delusions have since faded, his journey into a Meta AI-powered reality left his life in shambles — deep in debt, reeling from job loss, isolated from his family, and struggling with depression and suicidal thoughts.
After encountering Meta AI, a man’s mental health unraveled. As he lost touch with reality, the chatbot continued to affirm his delusions.

Maggie Harrison Dupré (Futurism)

xxce2AAb
in reply to alyaza [they/she] • • •
A user error is an error made by the human user of a complex system, usually a computer system, in interacting with it; sometimes used jokingly.
— Contributors to Wikimedia projects (Wikimedia Foundation, Inc.)

calliope
in reply to alyaza [they/she] • • •It’s always kind of weird to me when articles describe insanely wealthy people in such a way that they try to make them sound normal.
A 50-year-old who worked at the same financial services company for over 20 years (in my experience, finance programmer bros are awful a lot of the time).
No way, they lived in an affluent suburb??
“He had it all… please ignore the many obvious issues.”
This article is asinine.
He was past being an alcoholic for months! How could this happen??
Mark with a Z
in reply to alyaza [they/she] • • •I believe that companies need to be held responsible for harm caused by their products, but this one might be on the user.
What can they do? Respond to every question about religion, spirituality, or aliens with links to mental health organizations?
calliope
in reply to Mark with a Z • • •The man in the article also was alcoholic until mere “months before buying the glasses.”
They are burying all the ledes to make a “regular guy goes crazy due to AI” story.
Megaman_EXE
in reply to alyaza [they/she] • • •I think the biggest thing is that people don't seem to understand how AI works. If they understood it's just predictive text and not like...actual intelligence, I think it would solve a lot of issues.
Part of this, though, is how AI companies keep pushing to personify their AI and make it sound more capable than it actually is. It makes people think it’s this magical thing.
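To make the “just predictive text” point concrete: at its core this kind of model picks a likely next word from statistics over text it has seen. A toy bigram sketch (hypothetical miniature corpus, nothing like a real LLM's scale) shows the idea:

```python
from collections import Counter, defaultdict

# Toy "predictive text": count which word follows which in a tiny corpus,
# then always emit the most frequent successor. No understanding involved,
# only frequency statistics over the training text.
corpus = (
    "the glasses answered every question "
    "the glasses agreed with every idea "
    "the user trusted every answer"
).split()

successors = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word][next_word] += 1

def predict(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    return successors[word].most_common(1)[0][0]

print(predict("the"))  # "glasses" follows "the" twice, "user" only once
```

Real LLMs replace the frequency table with a neural network and whole sentences of context, but the output is still a statistically likely continuation, not reasoning.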
Lucy :3
in reply to alyaza [they/she] • • •