To grow, we must forget… but now AI remembers everything
With OpenAI’s memory upgrade, ChatGPT can recall everything you’ve ever shared with it, indefinitely. Similarly, Google has opened up the context window with “Infini-attention,” letting large language models (LLMs) reference effectively unbounded inputs with zero memory loss. In consumer-facing tools like ChatGPT or Gemini, this means persistent, personalized memory across conversations, unless you manually intervene. The sales pitch is seductively simple: less friction, more relevance. Conversations that feel like continuity: “systems that get to know you over your life,” as Sam Altman writes on X. Technology, finally, that meets you where you are.
In the age of hyper-personalization — of the TikTok For You page, Spotify Wrapped, and Netflix Your Next Watch — a conversational AI product that remembers everything about you feels perfectly, perhaps dangerously, natural.
Forgetting, then, begins to look like a flaw. A failure to retain. A bug in the code. Especially in our own lives, we treat memory loss as a tragedy, clinging to photo albums and cloud backups to preserve what time tries to erase.
But what if human forgetting is not a bug, but a feature? And what happens when we build machines that don’t forget, but are now helping shape the human minds that do?
DOC • To grow, we must forget… but now AI remembers everything
AI’s infinite memory could endanger how we think, grow, and imagine. And we can do something about it.
www.doc.cc

James R Kirk (in reply to alyaza [they/she])

TisButAScratch (in reply to alyaza [they/she]):
High quality article. Thanks for sharing.
This reminds me of Black Mirror's "The Entire History of You". Definitely not a good idea.
chicken (in reply to alyaza [they/she]):
I don't hate this article, but I'd rather have read a blog post grounded in the author's personal experience engaging with a personalized AI assistant. She clearly has her own opinions about how they should work, but instead of being about that, the piece tries to make it sound like there's a lot of objective certainty here, and that falls flat because it fails to draw a strong connection.
Like this part:
There's some hard evidence that stepping out of your comfort zone is good, but not really any that keeping people inside their comfort zone is, in practice, what the "infinite memory" features of personal AI assistants do to people; that part is just rhetorical speculation.
Which is a shame, because how these features affect people is pretty interesting to me. The idea of using an LLM with them always freaked me out a bit, and I quit using ChatGPT before they were implemented, but I want to know how it's going for the people who didn't, and who use it for stuff like the given example of picking a restaurant to eat at.