
Large-scale online deanonymization with LLMs


in reply to FineCoatMummy

I've been expecting to hear of something like this, it's a natural evolution of LLM use cases and grimly inevitable.
in reply to FineCoatMummy

It’s a damn good thing I’m a gun toting Ohio libertarian that never lies online at all
in reply to grey_maniac

We should go to the range sometime to get away from those dang liberals😎
in reply to FineCoatMummy

So it seems that letting LLMs write sloppy posts for us can be useful after all. Maybe c/privacy should implement automatic AI reformatting XD
in reply to corvus

Yah, there might be something to that. For protection against style + vocab matching.
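For a sense of what "style + vocab matching" means in practice, here is a toy sketch using character trigram frequencies, one of the simplest stylometric signals. This is an illustration only; the function names are made up, and real deanonymization pipelines are far more sophisticated.

```python
# Toy stylometry sketch: compare two texts by character trigram
# frequency, a crude version of the "style + vocab matching" a
# deanonymizer might use. Names here are illustrative, not from
# any real tool.
from collections import Counter
import math

def trigram_profile(text: str) -> Counter:
    """Count overlapping character trigrams, a simple authorship signal."""
    t = text.lower()
    return Counter(t[i:i + 3] for i in range(len(t) - 2))

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two trigram frequency vectors."""
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Two posts sharing the same verbal quirks score higher than an
# unrelated writer's text, which is why restyling can help.
post_a = "Yah, there might be something to that, I reckon."
post_b = "Yah, something to that for sure, I reckon it matters."
post_c = "Quarterly revenue projections exceeded analyst expectations."
print(cosine_similarity(trigram_profile(post_a), trigram_profile(post_b)))
print(cosine_similarity(trigram_profile(post_a), trigram_profile(post_c)))
```

Running an LLM rewrite over your posts flattens exactly these low-level frequency signals, which is the (grim) logic behind the suggestion above.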

It sucks, though. I recently read that the more people use LLM assistants when they write, the more the whole virtual commons grows bland. It feeds back on itself.

Sigh. I just want a world where we can have nice things. And assholes don't try to ruin the nice things we could have.

in reply to corvus

You’re absolutely right! It’s not just subterfuge—it’s praxis.
in reply to corvus

Previously, the advice was to translate your posts into one or two languages before posting. It seems that even rough content generated by large language models (LLMs) can help people fit in more easily.

I like how slop became "rough content" after translation.

in reply to FineCoatMummy

As if we need more lessons in how cautious we should be with what we're putting on the internet. What was true 20 years ago hasn't changed.
in reply to FineCoatMummy

Yes I've been worried about exactly this. I'm sure it's very much within the realm of possibility these days.
in reply to FineCoatMummy

Use throwaway accounts with no more than 6 months of history or 100 comments
in reply to FineCoatMummy

You can protect yourself by never giving away much info.


Which is why you should silo all online accounts and avoid linking them together.