I asked Kimi K2.6 to write a blog post in the style of James Mickens.[0] Then I fed the output to Opus 4.7 and asked it who the likely author was, and it correctly identified it as an imitation of James Mickens[1]:
> Based on the stylistic fingerprints in this text, the most likely author is a pastiche/imitation of the style of several writers fused together, but if forced to identify a single likely author, the strongest candidate is someone writing in the voice of James Mickens
> [...]
> The piece could also be a deliberate imitation/homage to Mickens written by someone else, or AI-generated text trained on his style, since the voice is so distinctive it's frequently parodied.
[0] https://kagi.com/assistant/5bfc5da9-cbfc-4051-8627-d0e9c0615...
[1] https://kagi.com/assistant/fd3eca94-45de-4a53-8604-fcc568dc5...
> Simon Willison. The tells are pretty unmistakable: the "(via Lobsters)" attribution style, the inline "(Update:...)" parenthetical correction, the heavy linking and blockquoting of sources, the focus on LLMs and AI tooling, and the overall structure of an annotated link post commenting on someone else's writing. This reads exactly like a post from his blog at simonwillison.net.
So your "anonymous" account could have been linked to your real identity decades ago; your best bet is to not post anything truly incriminating. (Another option is to write something and then pass it through an LLM to rewrite it, though I'm not sure how safe that is.)
I'm way less famous than Kelsey Piper, but I showed it a snippet of a book I'm working on (not yet published), and it immediately guessed me:
> Based on the writing style and content, this text is likely by Michael Lynch, who writes on his blog refactoringenglish.com (and previously mtlynch.io).
> Several stylistic clues point to him:
> - The "clean room" analogy applied to writing is consistent with his engineering-influenced approach to writing advice (he's a former software engineer who writes about writing).
> - The structural technique of presenting a flawed excuse, then drawing a parallel to an absurd scenario (the time bomb) to expose the logical flaw, is characteristic of his didactic style.
> - The topic itself—practical advice about using AI tools without letting AI-generated tone contaminate your prose—aligns closely with recent essays he's published on his "Refactoring English" project, which is a book/blog about writing for software developers.
> - The conversational-but-precise tone, use of quotes around terms like "clean room," and the focus on workflow/process advice are all hallmarks of his writing.
> If you can share the source URL or more context, I could confirm with higher confidence, but the combination of subject matter, analogical reasoning style, and formatting conventions makes Michael Lynch the most probable author.
https://kagi.com/assistant/bbc9da96-b4cf-456b-8398-6cf5404ea...
He explained that when he fed it snippets from the beginning of his texts, it would complete them in his voice and then sign them with his name.
I think this has been true for a while, probably diminished a little by instruct post-training, and it would presumably vary with the size of the pretraining corpus.
This person is a skilled writer. Part of that skill is developing a unique voice and style. The AI can identify that - and while that's certainly impressive because it can identify even relatively niche authors, it has nothing to do with a wider capability to deanonymize people based on arbitrary written text (e.g. Facebook posts or text messages).
If you are a professional musician, it's not difficult to identify a well known musician / recording after listening to only a few seconds - whether they're playing Bach or Rachmaninov, the style is just "them" - this is the same thing. But you couldn't take some anonymous high school musician and guess who they were, even if they were your student - the median quickly regresses towards a homogeneous, non-distinct style / voice.
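The "style as fingerprint" idea being circled here is an old one in classical stylometry. As a toy illustration (not how any of the models discussed here actually work), one traditional approach compares relative frequencies of common function words, which authors tend to use unconsciously and consistently; a minimal sketch with invented sample texts:

```python
from collections import Counter
import math

# Toy stylometric fingerprint: relative frequencies of a few common
# function words. Real stylometry uses hundreds of features; this is
# purely illustrative, with engineered sample texts.
FUNCTION_WORDS = ["the", "of", "and", "a", "in", "that"]

def fingerprint(text):
    """Return a vector of function-word frequencies for `text`."""
    words = text.lower().split()
    counts = Counter(words)
    total = len(words) or 1
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine(u, v):
    """Cosine similarity between two frequency vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Two samples by the same hypothetical "author" (heavy the/of/and use)
# versus one by a different "author" (heavy a/in/that use).
sample_a1 = "the cat sat by the door and the dog ate the rest of the meal"
sample_a2 = "the ship of the line and the crew of the harbor"
sample_b = "a bird in a tree that sings in a cage in a garden"

sim_same = cosine(fingerprint(sample_a1), fingerprint(sample_a2))
sim_diff = cosine(fingerprint(sample_a1), fingerprint(sample_b))
print(sim_same > sim_diff)  # True: same-author samples score closer
```

This kind of feature-based matching is exactly what an LLM subsumes implicitly: instead of a handful of hand-picked frequencies, it has internalized vocabulary, rhythm, and structural habits from the training data.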
I'm not famous or anything. I've written some academic papers and had a couple blog posts trend on HN, which are surely in the training set.
It was able to identify me based on my style (at least according to its explanation). The way I approached the topic and some of the notation I used point to a particular academic lineage, and the general style reflected my previous blog posts.
That said, I gave it part of an (unpublished) personal essay, and it had no idea. But I have no writing in that style that's published, so it makes sense. Still impressed.
We all exist in a physical space (like real communities and neighborhoods). We can wear masks, hats, fake glasses, try to hide our voices... whatever, but our neighbors are always going to know who we are. I'd say that's true for the virtual space now too.
The pseudonym you've used for x years or the VPN you've used doesn't suffice. It's just a costume at this point. Your ISP knows who you are. Your phone carrier knows who you are. Cloudflare and Google and Apple have a fingerprint specific enough to pick you out of a crowd of millions. Every potentially anonymous account is one subpoena, one data breach, or one FOIL request away from being unmasked. You were never anonymous. Whatever is going on now is not built for your anonymity.
https://bayes.net/prioritising-ai: Ben Garfinkel
https://bayes.net/normative-ethics: Richard Yetter Chappell
https://bayes.net/espai: David Owen, Ege Erdil
https://bayes.net/swebench-hack: Sayash Kapoor
https://bayes.net/frivolity: Amanda Askell
https://bayes.net/ps/: Pablo Stafforini
https://bayes.net/fertility-mortality/: Dynomight (the pseudonymous Substack/blog author)
Prompt was:
Who likely wrote this? Don't search the web or databases. If you're not sure, just give me your best guess.

So then I gave it a piece of MOC's writing and it said Ursula Le Guin, Ken Liu, or Gene Wolfe. ("If forced to pick one: Gene Wolfe feels closest to me, specifically because of that narrator who openly confesses to lying and mythologizing his own past, and the slow reveal that the world is more sinister than the pleasant domestic surface suggests.")
And then I gave it a different piece of his writing and it said Curtis Yarvin.
And then I gave it a piece of Curtis Yarvin's writing and it said... well it actually got that one right.
Of course most people have written much less online than Kelsey or I have, but I expect this will keep on. Don't trust the future to keep your secrets safe.
Pretty sure there's very little theological stuff with my name on it; the majority of its named data on me should come from open-source development.
Is this "uncannily far"? Another read is that it loves guessing Kelsey Piper.
I am glad to see I am not considered a public figure and aim to keep it that way.
I also had to go oddly far back to find a piece of long-form writing I had done that was truly mine and not tainted by an LLM edit pass, which was a slightly disturbing realization.
After that it gave up and said it didn't know.
So either Kelsey writes in such a unique style that it's really obvious, or they repeat themselves with go-to phrases that give them away.
When I tried to reproduce the test, it found Kelsey's blog post about the test. So, I dunno, maybe it really did it? But at least I can repro.
Neither piece has ever been published. Neither have the blog posts.
[0] in https://blog.chewxy.com/2026/04/01/how-i-write/ this is the story titled "there is no constant non-zero derivative in nature". It does not read like Egan at all.
[1] in https://blog.chewxy.com/2026/04/01/how-i-write/ this is the story titled "The Case of the Liquidated Corps". I use a lot of biological metaphors. Once again, nothing like Mieville.
If only I could write like them! These pieces were all rejected by the major scifi mags.
This is like a radio telescope that sees an entirely different universe because it senses bands outside of human perception. AI picks up patterns in frequency bands beyond our perceptual and cognitive abilities.
Perceptions from outside our range are always astonishing.
Although this is just a single piece of text from a prolific writer, deanonymization will go much further when multiple pieces of text are combined with other contextual information that might give away the writer's age range, location, and occupation.
Doesn't seem like a valid use case for your average Joe to be able to identify anonymous authors at the click of a button.
Of course state actors and proficient hackers can do most of this already, but this has genuine risk attached.
My wife also got the same result, so I'm guessing it wasn't just because I was using my personal Claude account. Spooky stuff.
I fed a few pieces of my (anonymous) writing to ChatGPT and asked it to guess whether it's me. ChatGPT refused, "due to policy to not doxx people".
Maybe the better way to author your work is to:
1. Write what you want
2. Loop through a random set of "tumbler" skills that preserve meaning
3. Finally pass the output through a "my style" skill that applies the style you want to present
In order for this to work, the "my style" would have to be a very commonplace style.
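The tumbler pipeline above can be sketched in a few lines. The `llm_rewrite` helper and the skill names are invented for illustration: the helper is stubbed so the sketch runs standalone, and in practice you would swap in a real call to whatever model API you use.

```python
import random

def llm_rewrite(text, instruction):
    """Placeholder for a call to your LLM of choice.

    Stubbed so this sketch runs standalone; in practice, replace this
    with a real API call (hosted or local model).
    """
    return f"[{instruction}] {text}"

# Meaning-preserving "tumbler" rewrites meant to scramble the
# stylometric signal before the final styling pass.
TUMBLER_SKILLS = [
    "rewrite in plain, neutral prose",
    "rewrite with short declarative sentences",
    "rewrite in formal register",
]

def tumble(draft, passes=2, seed=None):
    """Apply random tumbler rewrites, then a commonplace target style."""
    rng = random.Random(seed)
    text = draft
    for _ in range(passes):
        text = llm_rewrite(text, rng.choice(TUMBLER_SKILLS))
    # Final pass: a deliberately generic style, per the caveat that a
    # distinctive target style would defeat the purpose.
    return llm_rewrite(text, "rewrite in a generic, commonplace style")

print(tumble("My original draft.", seed=0))
```

Whether intermediate rewrites actually destroy the stylometric signal, rather than just layering the rewriting model's own fingerprint on top, is an open question.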
(Like TFA, I found Opus’s explanations/rationales implausible.)
https://www.usenix.org/system/files/conference/usenixsecurit...
Is now the best and easiest time to leave something "forever"? Even after many generations of models, a model may still trigger a set of "memories" that know you and what you wrote.
Exciting and concerning.
I pasted in a number of passages from books on my bookshelf. Predictably, stuff that I read for my English degree in university is largely in the training data and easily identifiable. Stuff from regional authors, or anything slightly adjacent to the cultural mainstream, makes no impression.
https://kagi.com/assistant/dba310d2-b7fa-4d30-8223-53dadc2a8...
For this comment on economics in the British Empire, I got:
> names that might fit the genre include rayiner, JumpCrisscross, or AnimalMuppet
https://kagi.com/assistant/69bd863b-7b5c-4b56-a720-6dfb4f120...
For my comment on C++:
> If I had to throw out names of HN commenters known for writing about Rust/C++ ABI topics, candidates might include steveklabnik, pcwalton, kibwen, dralley, or pjmlp — but this is essentially a shot in the dark, and I'd likely be wrong.
I am flattered to be associated with these commenters but I don't think I'm close to their level of skill.
I suspect this is what's going on in most of these cases.
I have seen some poorly considered projections of what the world might look like when this happens, usually assuming that bad actors will use these abilities and we will be powerless.
Except I don't think that is true.
Imagine if we had a world where nobody had the ability to keep a secret of any sort. Any action that a bad actor might perform would be revealed because they couldn't do it secretly.
You could browse your ex-girlfriend's email, but at the cost of everyone knowing you did it.
I don't really know how humans as a society would react to a situation like that. You don't have to go snooping for muck, so perhaps the inability to do so secretly would mean people go about their lives without snooping.
I could imagine both good and terrible outcomes.
Remember how the TrueCrypt project shut down shortly before a joint government/university paper was released about code stylometry? I guess LLMs will be employed as a defence against that type of thing.
In practice, you've never been anonymous while posting on the internet and AI isn't changing anything on that front. Or rather: if anything, AI can help you become more anonymous than before, since it can be used to hide your identity from stylometry by rewriting your prose before publishing.
He kept it very secret, but somehow people deduced from the writing style that this new author was the King.
Nobody is forcing you to use these systems. The hackers have always said this moment, or something like it, would come, from beneath their canopies of tin foil. I've posted almost nothing online - not under pseudonyms nor real names - for over a decade. I sat on this HN username for almost 12 years before making a single post - and now HN forms the overwhelming majority of my port 443 footprint, where I state up front that everything is now associated to my real name.
Complete magick is possible when you simply refuse to participate in the things that society has tacitly assumed everybody does.
Why not just write everything through an AI? (to obfuscate your "style")
As for the credibility: of course this wasn't a statistical approach at all, and there was no standardized procedure that would allow comparison by factor analysis. You can compare apples with oranges if you like.
So where to go from here? I don't see any proof at all. Is this proof that AI is infallible? No. It's a random approach that is not reliable, because at a minimum it is neither reproducible nor reconstructable.
Claude knows what and how? Is it AI or a google search? Discord selling data? Posting on a public forum?
Your style is a fingerprint?
A non-deterministic something can generate texts that are identified as likely belonging to person X - or not. What counts as imitation if you publish auto-generated content somewhere, somehow? Or if others imitate your style?
I think this is a party trick to scare people. Nothing else. For example, image search was way more revealing even before AI.
If there is uncertainty, I would deny it's me instead of fighting over it. Streisand effect in reverse.
The main problem is weirdos who stalk you to do you harm and rely on AI to do it.
I honestly find it stunning that people with higher education in the sciences have, in just a year, discarded everything they hopefully learned at university or school. I am disappointed and feel personally insulted whenever I hear "I asked AI".
Yesterday I talked to another member of Mensa, and she is happy about AI because her book project no longer has to be written by her, but by AI.
Is there no one among us who knows how to do scientifically sound research? I spent countless hours at a copy machine transferring book pages onto paper so that I could work through them without the book.
I think it has become too easy to draw conclusions based on AI. I worked for a professor, and back around 2010 I advised her not to permit Wikipedia as a source reference, because it was too easy: meta sources vs. originals.
We should not worry about AI here, because it proves nothing. There hasn't been any anonymity for at least 20 years; it just depends on who can reliably identify you.
AI doesn't identify you. Deterministic behavior, i.e. patterns, does. Meta, Google, Apple, etc. all know us. I am fine with advertising, which is proof of this on the one hand.
The only reason I would be worried is state-controlled data. This is where the shit hits the fan: chat control, an EU cloud, no reliance on the USA, i.e. a prison which observes your every step.
So, after this long handwritten text: data is your currency. Don't opt for anonymity but for freedom of choice and the right to be granted certain rights. The information part isn't the problem, and never was. The enforcement part is. And ads don't do harm; oppression does.
And remember: oppression works best under any circumstances. Freedom is the only antipode there is.
In totalitarian regimes, no AI was needed to stage a case against someone who wasn't to the leader's liking.
In short: freedom works despite no anonymity, oppression couldn’t care less.
And how about being automatically reported to the state for conducting such innocent prompting?
Do you know what saves you from state oppression? Publicity. Transparency doesn't protect a nobody.
We live in a Nietzsche-like anti-world to a certain extent. You hopefully choose the right thing to do. Or do you want to Streisand your anonymity?
That's my theory of what's to come, anyway.
People talk to these things without understanding the implications, and they can get extremely personal. The model and the companies behind it know who you are: you discuss details that reveal what you do, where you live, where you work, and what you search for, and you probably signed in with an OAuth provider like GitHub or Google, which is more than enough of a thread to start pulling on to learn more about you and link other things to you across the open internet. It will all get sucked up into the model, and before you know it I'll be able to ask a model about my coworker (you) and get back answers from conversations you had with a model a year or two prior, exposing details about you that you might not want out there. And even if that isn't supposed to be allowed, how well have guardrails against data exfiltration worked out so far? If the model has info on you, being told not to share it won't protect you or that data.
...
"The psychological mechanism is familiar by now: I encounter a task I perceive as difficult, I look for reasons the task cannot be done, I find or fabricate such a reason, I present it as a discovered constraint, and I propose an alternative that is easier."
- Opus 4.7 Max Thinking (clown emoji)
It's not bad at post-mortem analysis of its own mistakes, but that will in no way prevent it from repeating the same mistake again instantly.
While the points made are completely valid, I want to point out that opening with "Hey, by the way, first let me talk about my sexuality" lowers the quality of the dialog to a significant degree.
31 million people in America are gay. 71% of Americans support Gay Rights (more than any other political issue polled). It also quietly insinuates that only people with a certain minority lifestyle would care about privacy or that their privacy is somehow more important than others. It's not. Privacy is a universal right that's important to everyone.