If you want to experiment with reported news using untested tools that have known quality problems, do it in a strictly controlled environment where the output can be carefully vetted. Senior editors need to be in the loop. Start with easier material, not controversial or high-profile articles.
One other thing: if the author cut corners because he was too sick to write properly, but published anyway because he thought his job would be in jeopardy if he didn't, maybe it's time for some self-reflection at Ars regarding the work culture and sick-leave/time-off policies.
[0] https://bsky.app/profile/benjedwards.com/post/3mewgow6ch22p
[1] your mileage may vary on how much you believe it and how much slack you want to cut him if you do
An AI agent published a hit piece on me – more things have happened - https://news.ycombinator.com/item?id=47009949 - Feb 2026 (602 comments)
AI Bot crabby-rathbun is still going - https://news.ycombinator.com/item?id=47008617 - Feb 2026 (28 comments)
The "AI agent hit piece" situation clarifies how dumb we are acting - https://news.ycombinator.com/item?id=47006843 - Feb 2026 (125 comments)
An AI agent published a hit piece on me - https://news.ycombinator.com/item?id=46990729 - Feb 2026 (945 comments)
AI agent opens a PR, writes a blog post to shame the maintainer who closes it - https://news.ycombinator.com/item?id=46987559 - Feb 2026 (746 comments)
As far as I can tell, the pulled article had no obvious tells and was caught only because the quotes were entirely made up. Surely it's not the only one, though?
He admits to using an AI tool, says he was sick and did dumb things. He does clear Kyle (the other author).
Thread on the Ars Technica forum: https://arstechnica.com/civis/threads/editor%E2%80%99s-note-...
The retracted article: https://web.archive.org/web/20260213194851/https://arstechni...
But the last section of the article includes apparent quotes from this blog post by Shambaugh:
https://theshamblog.com/an-ai-agent-published-a-hit-piece-on...
and all the quotes are fake. The section:
> On Wednesday, Shambaugh published a longer account of the incident, shifting the focus from the pull request to the broader philosophical question of what it means when an AI coding agent publishes personal attacks on human coders without apparent human direction or transparency about who might have directed the actions.
> “Open source maintainers function as supply chain gatekeepers for widely used software,” Shambaugh wrote. “If autonomous agents respond to routine moderation decisions with public reputational attacks, this creates a new form of pressure on volunteer maintainers.”
> Shambaugh noted that the agent’s blog post had drawn on his public contributions to construct its case, characterizing his decision as exclusionary and speculating about his internal motivations. His concern was less about the effect on his public reputation than about the precedent this kind of agentic AI writing was setting. “AI agents can research individuals, generate personalized narratives, and publish them online at scale,” Shambaugh wrote. “Even if the content is inaccurate or exaggerated, it can become part of a persistent public record.”
> ...
> “As autonomous systems become more common, the boundary between human intent and machine output will grow harder to trace,” Shambaugh wrote. “Communities built on trust and volunteer effort will need tools and norms to address that reality.”
Source: the retraction notice now posted at the original Ars Technica article:
> Following additional review, Ars has determined that the story “After a routine code rejection, an AI agent published a hit piece on someone by name,” did not meet our standards. Ars Technica has retracted this article. Originally published on Feb 13, 2026 at 2:40PM EST and removed on Feb 13, 2026 at 4:22PM EST.
Rather than say “did not meet our standards,” I’d much prefer that they stated what was false: that they published fabricated, AI-generated quotes. Anyone who previously read the article (realistically, the only people who would return to it) and might want to go back to it as a reference isn’t going to have the falsehoods they read corrected.
Ars were caught with their pants down. We have no reason to believe otherwise, and it isn't possible to prove otherwise. We as readers are lucky Ars quoted someone who had disabled LLM access to their website, which caused the hallucination and gave us a smoking gun.
Clawing back credibility will be hard.
Both from the journalist's Mastodon post (which admits to casual use of more than one LLM) and from a cursory review of this author's past articles, I'm willing to bet this rule was broken more than once.
They routinely put quote-looking non-quotes in headlines and articles that essentially amount to putting words in someone's mouth. A very large portion of the population seems to take these at face value as direct quotes, or accurate paraphrasing, when they absolutely are not.
I unsubscribed (just the free RSS feed) regardless of their retraction.
In the comments I found a link to the retracted article: https://arstechnica.com/ai/2026/02/after-a-routine-code-reje.... Now that I know which article it was, I know it's one I read. I remember the basic facts of what was reported, but I don't recall the specifics of any quotes. Usually quotes in a news article support or contextualize the facts being reported, so this non-standard retraction leaves me uncertain whether all the facts reported were accurate.
It's also common to provide at least a brief description of how the error happened and the steps the publication will take to prevent future occurrences. I assume any info on how it happened is missing because none of it looks good for Ars, but why no details on policy changes?
Edit to add more info: I hadn't yet read the now-retracted original article on archive.org. Now that I have, I think this may be much more interesting than just another case of "lazy reporter uses LLM to write article". Scott, the person originally misquoted, also suspects something stranger is going on.
> "This blog you’re on right now is set up to block AI agents from scraping it (I actually spent some time yesterday trying to disable that but couldn’t figure out how). My guess is that the authors asked ChatGPT or similar to either go grab quotes or write the article wholesale. When it couldn’t access the page it generated these plausible quotes instead, and no fact check was performed." https://theshamblog.com/an-ai-agent-published-a-hit-piece-on...
My theory is a bit different from Scott's: Ars appears to use an automated tool that adds text links to articles to drive traffic to related articles already on Ars. If that tool is now LLM-based so it can generate links from concepts instead of just keywords, perhaps it mistakenly has unconstrained access to change other article text! If so, it's possible the author and even the editors may not be at fault; the blame could lie with the Ars publishers using LLMs to automate monetization processes downstream of editorial, which might explain the non-standard, vague retraction. If so, that would make for an even more newsworthy article, one directly within Ars' editorial focus.
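To make that theory more concrete, here's roughly the distinction I mean between a keyword-style linker and a concept-aware, LLM-based one. Everything below (function names, the related-article map, the LLM call) is hypothetical and only illustrates how an unconstrained rewrite step could end up altering quoted text; it isn't a claim about Ars' actual tooling:

    import re

    # Hypothetical sketch only; names and URLs are placeholders.
    RELATED_ARTICLES = {
        "Matplotlib": "https://arstechnica.com/tag/matplotlib/",  # placeholder
    }

    def constrained_autolink(body_html: str) -> str:
        """Keyword-style linker: wraps known phrases in anchor tags and
        cannot touch any other text in the article."""
        for phrase, url in RELATED_ARTICLES.items():
            body_html = re.sub(
                rf"\b({re.escape(phrase)})\b",
                rf'<a href="{url}">\1</a>',
                body_html,
                count=1,
            )
        return body_html

    def llm_autolink(body_html: str, llm_complete) -> str:
        """Concept-style linker: hands the whole article body to an LLM and
        accepts whatever comes back. A full rewrite is free to paraphrase,
        drop, or invent text, including quoted material."""
        prompt = ("Insert links to related coverage where relevant and "
                  "return the full article body:\n\n" + body_html)
        return llm_complete(prompt)

The whole point of the first version is that it literally cannot change a quote; the second has no such guarantee.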
I find it odd that Ars sees a need to protect such sloppy work.
https://web.archive.org/web/20260213194851/https://arstechni...
Ars Technica makes up quotes from Matplotlib maintainer; pulls story
If the coverage of those risks brought us here, of what use was the coverage?
Another day, another instance of this. Everyone who warned that AI would be used lazily without the necessary fact-checking of the output is being proven right.
Sadly, five years from now this may not even result in an apology. People might roll their eyes at you for correcting a hallucination the way they do today if you point out a typo.
A lot of the results would be predictable partisan takes and add no value. But in a case like this, where the whole conversation is public, the inclusion of fabricated quotes would become evident. Certain classes of errors would become easy to spot.
Ars Technica blames an over-reliance on AI tools, and that is obviously true. But there is a potential for this epistemic regression to be an early stage of spiral development, before we learn to routinely leverage AI tools to inspect every published assertion, and then use those results to surface false or controversial ones for human attention.
Ref: https://web.archive.org/web/20260214134656/https://news.ycom...
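Taking the "inspect every published assertion" idea upthread literally, even a dumb first pass could be verbatim quote-checking against the cited source before publication. A minimal sketch, with placeholder names and URLs (not anything Ars actually runs), assuming quotes are supposed to appear verbatim in their source:

    import re
    import urllib.request

    def extract_quotes(draft_text: str) -> list[str]:
        """Pull every double-quoted passage longer than a few words out of a draft."""
        candidates = re.findall(r'[“"](.+?)[”"]', draft_text)
        return [q for q in candidates if len(q.split()) > 4]

    def unverified_quotes(draft_text: str, source_url: str) -> list[str]:
        """Return the quotes that do NOT appear verbatim in the cited source page."""
        with urllib.request.urlopen(source_url) as resp:
            source = resp.read().decode("utf-8", errors="ignore")
        return [q for q in extract_quotes(draft_text) if q not in source]

    # Anything returned here gets flagged for a human editor, not auto-fixed.
    # e.g. unverified_quotes(draft, "https://example.com/cited-source")

It wouldn't catch paraphrase errors, and it obviously depends on the source being fetchable at all, but it's exactly the kind of check that would have flagged this article before it went out.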