by Springtime
1 subcomment
- I thought this would be more about stylometry, but it's mostly about users literally posting the same identifiable information across multiple services; in one example, their age, dog's name, and profession.
These are all classic doxxing techniques, right down to treating spelling differences as regional signals and recurring topics as identifying traits.
That's why, if you use different identities, you have to think about what gets posted to which community rather than posting the same things everywhere. Though any such effort would be wasted if it relied on some non-public info that a later database breach exposed, tying together previously unrelated profiles.
- There was a tool shared here that could show which accounts belong to the same person based on writing patterns. Can't remember the name, but it found my old accounts on HN pretty accurately.
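The underlying idea in most such tools is simple stylometry: authors leave statistical fingerprints in their text. A minimal sketch, assuming character 3-grams and cosine similarity (the n-gram size and the sample texts are illustrative, not whatever that tool actually used):
```python
# Toy stylometric linkage: compare two text samples by the cosine
# similarity of their character 3-gram frequency profiles.
from collections import Counter
from math import sqrt

def ngram_profile(text: str, n: int = 3) -> Counter:
    """Frequency profile of character n-grams, case-folded."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[g] * b[g] for g in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

sample_a = "Can't remember the name, but it found my old accounts."
sample_b = "I cannot recall what it was called; it located my accounts."
print(f"similarity: {cosine(ngram_profile(sample_a), ngram_profile(sample_b)):.3f}")
```
Real systems add function-word frequencies, punctuation habits, and so on, but the principle is the same.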
by xtiansimon
0 subcomments
- > “This is a pretty new capability; previous approaches on re-identification generally required structured data, and two datasets with a similar schema that could be linked together.”
Right up there with Skynet, for me, is the idea of disparate databases all being linked up by bad actors.
It appears as though DOGE illegally obtained taxpayer data from the IRS. I don’t trust DOGE to safeguard anything.
And the penalties do not seem very severe outside of HIPAA.
https://democracyforward.org/news/press-releases/new-details...
- The internet is getting less interesting by the day.
by kanemcgrath
1 subcomment
- Unmasking anonymous accounts represents a new kind of threat to anonymity.
Not just this LLM-based technique, but the earlier text-similarity one too.
But I think both would generally be easy to counter in the same way:
use an LLM or heuristics to pose as someone else.
Not only do you erase your own traces, you add false positives to the system, which reduces the overall effectiveness of these techniques in the future.
A bit of poisoning the well.
I hope an easy-to-use tool, maybe built on a small local LLM, eventually makes this frictionless enough that any future deanonymization attack is too unreliable to trust.
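A minimal sketch of what such a tool could look like, assuming a local model behind an OpenAI-compatible endpoint (the Ollama URL and model name below are placeholders for whatever you run locally):
```python
# Hedged sketch: launder a draft's style through a small local model
# before posting. Assumes an OpenAI-compatible server (e.g. Ollama at
# http://localhost:11434/v1); substitute whatever you actually run.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

def launder_style(draft: str, persona: str = "a terse, formal writer") -> str:
    """Rewrite the draft in a different voice, preserving meaning."""
    resp = client.chat.completions.create(
        model="llama3.2",  # assumption: any small local chat model works
        messages=[
            {"role": "system",
             "content": f"Rewrite the user's text in the voice of {persona}. "
                        "Keep the meaning; change vocabulary, punctuation "
                        "habits, and sentence rhythm. Output only the rewrite."},
            {"role": "user", "content": draft},
        ],
    )
    return resp.choices[0].message.content

print(launder_style("Figured this is gonna happen, and it'll just get worse."))
```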
by petesergeant
1 subcomment
- As a 32-year-old Ghanaian woman living in Luang Prabang and training as an ophthalmologist, this gives me some food for thought!
by ranger_danger
12 subcomments
- Only if said users happen to commit OPSEC failures themselves. LLMs aren't magic...
If someone can figure out who I am or what city I live in just by this username or my comments (with proof), I'll personally send you 500,000 JPY. I'm quite confident that's not going to happen though.
The paper referenced in the article does not even explain its exact testing methodology (such as the tools or exact prompts used), because the authors claim it would be misused for evil. In other words, "trust me bro."
Also see the previous discussion here: https://news.ycombinator.com/item?id=47139716
by shubhamintech
0 subcomments
- Stylometry is just the most legible version of this. The harder-to-defend surface: posting time patterns, topic clusters, cross-platform phrase matching, interaction graphs. LLMs synthesize weak signals at scale in a way no single analyst could, which makes the threat model fundamentally larger than "change how you write." Most OPSEC advice is written for the pre-LLM world.
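For illustration, a hedged sketch of that kind of weak-signal fusion: scoring an account pair by combining posting-hour overlap with shared rare phrases. The features, cap, and weights are invented for this example; real pipelines fuse many more signals, which is exactly what makes them hard to defend against:
```python
# Illustrative weak-signal fusion: score how likely two accounts are the
# same person by combining posting-hour overlap with shared rare phrases.
from collections import Counter

def hour_histogram(post_hours: list[int]) -> list[float]:
    """Normalised 24-bin histogram of posting hours (0-23, UTC)."""
    counts = Counter(post_hours)
    total = sum(counts.values()) or 1
    return [counts.get(h, 0) / total for h in range(24)]

def histogram_overlap(p: list[float], q: list[float]) -> float:
    """1.0 means identical posting rhythms, 0.0 means disjoint."""
    return sum(min(a, b) for a, b in zip(p, q))

def shared_word_ngrams(posts_a: list[str], posts_b: list[str], n: int = 4) -> int:
    """Count distinct n-word phrases appearing in both accounts' posts."""
    def grams(posts):
        out = set()
        for post in posts:
            words = post.lower().split()
            out.update(tuple(words[i:i + n]) for i in range(len(words) - n + 1))
        return out
    return len(grams(posts_a) & grams(posts_b))

def linkage_score(hours_a, hours_b, posts_a, posts_b) -> float:
    time_sig = histogram_overlap(hour_histogram(hours_a), hour_histogram(hours_b))
    phrase_sig = min(shared_word_ngrams(posts_a, posts_b) / 10, 1.0)  # cap at 1
    return 0.5 * time_sig + 0.5 * phrase_sig  # weights are arbitrary
```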
- So tell an LLM what you would like the post to say, and then post the output?
LLM as the sickness and the cure...
- To state the obvious, we all need personal, local tools that warn us when we're making opsec errors.
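A toy sketch of the idea, assuming nothing fancier than regex patterns over a draft before it leaves your machine (the patterns are an invented, tiny subset; a real tool would want far broader coverage, and probably a small local LLM doing the review):
```python
# Sketch of a local pre-post linter: flag common self-disclosures
# in a draft before posting. Patterns are illustrative only.
import re

DISCLOSURE_PATTERNS = {
    "age":        re.compile(r"\b(?:i am|i'm)\s+\d{1,2}\b", re.I),
    "location":   re.compile(r"\b(?:i live in|based in|moved to)\s+\w+", re.I),
    "profession": re.compile(r"\b(?:i work as|my job as)\s+\w+", re.I),
    "pet name":   re.compile(r"\bmy (?:dog|cat)\s+\w+", re.I),
}

def opsec_warnings(draft: str) -> list[str]:
    return [f"possible {label} disclosure: {m.group(0)!r}"
            for label, pattern in DISCLOSURE_PATTERNS.items()
            if (m := pattern.search(draft))]

for warning in opsec_warnings("I'm 32 and I live in Luang Prabang with my dog Kofi."):
    print("WARNING:", warning)
```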
- > If you request deletion of your Hacker News account, note that we reserve the right to refuse to (i) delete any of the submissions, favorites, or comments you posted on the Hacker News site
Probably not GDPR-compliant then if comments can be deanonymised by LLMs.
by Bombthecat
0 subcomments
- Figured this was going to happen. And it will only get worse.
I can already see Palantir as the new man in the middle, telling services: this guy with the same IP just posted xxx on yyy.
by ChrisArchitect
0 subcomments
- [dupe] Discussion on source: https://news.ycombinator.com/item?id=47139716
- So um, can an AI also inject enough noise into the internet to make it harder to unmask me?
Should I, like, just ask Claude Code to come up with this idea this weekend?
by bitbasher
1 subcomment
- One solution is to flood the network with LLM slop and hide among the noise.
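Sketch of the mechanics, assuming the same kind of local OpenAI-compatible endpoint as above (the model name and personas are invented; this illustrates the mechanism, not an endorsement of actually spamming communities):
```python
# Toy decoy generator: produce filler comments in randomly assigned
# voices so cross-account stylometry fills up with false positives.
import random
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

PERSONAS = [
    "a retired marine engineer who writes long, comma-heavy sentences",
    "a student who types in lowercase with no punctuation",
    "a formal non-native English speaker",
]

def decoy_comment(topic: str) -> str:
    persona = random.choice(PERSONAS)
    resp = client.chat.completions.create(
        model="llama3.2",  # assumption: any small local chat model
        messages=[{
            "role": "user",
            "content": f"You are {persona}. Write one short forum comment about {topic}.",
        }],
    )
    return resp.choices[0].message.content
```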
by akssassin907
0 subcomments
- [flagged]