2026-02-11 20:57:59
I built a private AI companion where each user gets an isolated container with no data sharing
I have been thinking a lot about privacy in AI companionship. Most AI companion apps harvest your most intimate conversations for training data, targeted advertising, or both. Replika, Character.AI, and similar services all have troubling privacy practices.
So I built an alternative on Telegram: Adola.
The privacy architecture:
- Each user gets their own isolated Docker container. Your conversations never share memory, context, or processing with any other user.
- Memory stays on disk, not in a cloud database. The agent maintains a local MEMORY.md file. There is no vector database, no cloud sync, no analytics pipeline.
- No training on user data. Conversations go to the model API for inference only and are not stored by us for any other purpose.
- No social graph. There is no friend list, no recommended contacts, no "people who talked to your bot also talked to..." features.
- No ads, no premium tiers, no monetization of conversations.
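To make the per-user isolation concrete, here is a minimal sketch of how one container per user might be launched. Everything in it is illustrative, not Adola's actual setup: the image name, resource limits, and volume layout are assumptions, and the MEMORY.md path is just one plausible place to mount per-user state.

```python
def container_args(user_id: int) -> list[str]:
    """Build a `docker run` argv that gives one user a private container.

    Each user gets their own named volume (where a file like MEMORY.md
    would live) and no network access to other containers. All names,
    flags, and limits here are hypothetical examples.
    """
    volume = f"adola-user-{user_id}"  # one named volume per user
    return [
        "docker", "run", "-d",
        "--name", f"adola-{user_id}",   # one container per user
        "--network", "none",            # no container-to-container traffic
        "--memory", "512m",             # cap resources per user
        "-v", f"{volume}:/data",        # per-user disk state, e.g. /data/MEMORY.md
        "adola-agent:latest",           # hypothetical agent image
    ]

if __name__ == "__main__":
    import subprocess
    args = container_args(42)
    # subprocess.run(args, check=True)  # would actually start the container
    print(" ".join(args))
```

The point of building the argv in one place is that the isolation guarantees ("no shared network", "state on its own volume") are explicit flags you can audit, rather than properties scattered across deployment scripts.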
The tradeoff is that this is running on a single server and cannot scale to millions of users. But for the people who use it, the privacy guarantee is real.
I am curious what this community thinks about the privacy implications of AI companions in general. Is it possible to build something like this ethically, or is the very concept of an AI companion inherently at odds with privacy?
