In this incident, Aurich Lawson of Ars Technica deleted the original article (which contained LLM-hallucinated quotes) instead of updating it with a correction. He then published a vague non-apology, just like large companies and politicians usually do. And now we learn that this reporter was fired, and yet Ars Technica doesn’t publish so much as a snippet of an article about it.
There’s something to be said for the value of owning up to issues and being forthright about actions and consequences. In this age of indignation and fear of being perceived as weak or vulnerable for being honest, I would have thought Ars would be, or could have been, a beacon for how these things should be handled.
It’s sad to see Ars Technica at this level.
Last year I went viral, and Benji was the first person to interview me. It was a really cool experience: we chatted via Twitter DMs, and he wrote a piece about my work. Overall he did a decent job.
Then, six months later, a separate project I was adjacent to started to pick up steam. I reached out to him asking if he wanted to cover us. No response.
Then TechCrunch wrote an article on our project.
I reached out to Benji again, saying "Hey, would you like to chat again, now that we have some coverage?" And he finally responded, but said he couldn't report on me because he had a directive that he could only report on things that didn't have any prior or pre-existing coverage (?)
I thought that was rather strange, especially since we already had built up a relationship.
I don't really have a moral or lesson to this story, other than that journalism can be rather opaque sometimes.
Oh, one other tip for anyone reading this: if you ever get reached out to by journalists, communicate in writing, not over a phone call, so you can be VERY precise in your wording.
This is a classic systems failure: you remove the safety mechanisms, add a new source of risk, and punish the individual operator. It's the same pattern you see in industrial accidents. The Swiss cheese model applies — every editorial layer that got cut was a slice of cheese being removed.
The more interesting policy question is whether publications should be required to disclose AI tool usage in their editorial process, similar to how financial publications disclose conflicts of interest. The FTC has signaled interest in AI-generated content transparency but hasn't issued concrete guidance for journalism yet.
If this were just some random blogger, then yes the blame is totally theirs. But this was published under the Ars Technica masthead and there should have been someone or something double checking the veracity of the contents.
That said, a number of Ars Technica contributors are among the best in their fields: Eric Berger, Dan Goodin, Beth Mole, Stephen Clark, and Andrew Cunningham, amongst many others, so one f'up shouldn't impugn the entire organization.
The expectation is to produce more with much less (staff), the pipeline is heavily optimized for clicks, and every single headline is A/B tested. Ars isn't alone in churning out poorly reviewed clickbait (and then not owning its mistakes).
I really don't know where the internet is heading or how any content site can survive.
Has Orland issued a real apology? He bylined a piece containing fraudulent quotes.
1. An editor's role is not to make sure every quote is accurate to what the source spoke or wrote. They are not babysitters. It's a higher-level position that is supposed to ensure that the article makes sense; that the sources are sufficient in number and credible; that the story covers the assigned topic; and so on. Reporters are given a large degree of freedom and are expected to have sufficient education/training and ethical grounding* to do their jobs independently. What seems to have happened here is a lapse in judgement while the reporter was sick and on a deadline. Pressure's a bitch.
2. This is certainly more of an opinion, but AI tools have zero place in any profession that relies on information integrity (law, journalism). The reporter shouldn't have used it, and the editors and editorial processes are not at fault, especially when Ars already forbids AI-generated content.
* I'm aware people think reporters are liars with agendas. You don't need to say it.
Well, Ars Technica has been on my ignore list for quite some time already, and this further solidifies its place there.
When Ars released a statement saying this was an isolated incident, my reaction was "they probably didn't look too hard". I suspect they did, in the end?
-Isaac Newton
The readers there are borderline militant about AI's more problematic uses. This could have gone only one way.
I'm skeptical. I hate to be the one to say it, but I don't think this would have happened if he was using Claude 4.6 Opus.
If the content is human-written and you check your sources, there is no way for AI to “accidentally” seep in. Sure, you can use an AI tool to find links to places you should check, and then go and verify the sources. That's obviously not what happened.
Pretty weird that journalism as a business still revolves around "we hired a guy to write a thing, and he's perfect. oh wait, he's not perfect? it was all his fault. we've hired a new perfect guy, so everything's good now." My dudes... there are many ways you can vet information before publishing it. I get that the business is all about "being first", but that also seems to imply "being the first to be wrong".
I feel bad for the reporters. People seem to be piling onto them like they're supposed to be superhuman, but actually they're normal people under intense pressure. People fail, it's human. But when an organization fails, it's a failure of many people, not one.
Imagine what he could have gotten up to with LLMs.
“Everyone knows that Perl is designed to make easy things easy, and hard things possible, but nobody knows why it’s called Perl.”
Which of course returns 0 results on Google, as is customary for famous quotes.

This, right here. Coming from an "AI Expert", this is what we can expect the future to be. One AI isn't working? Let's ask the other AI why. I have no words for that reflex. It's beyond idiotic. It takes everything that's human about your reasoning and tosses it aside. What a dumb idea.
Which should be a red flag in and of itself. You don't need to "push" people to "find uses" for genuinely useful tools.
AI is not a tool and from the way things are going never will be. Humans are more tool-like in that sense. In this case the human was discarded, the AI remains.
The main comment I found relevant is probably this (he has written more, but I'm pasting what I find relevant for my comment):
> I have been sick with Covid all week and missed Mon and Tues due to this. On Friday, while working from bed with a fever and very little sleep, I unintentionally made a serious journalistic error in an article about Scott Shambaugh
... > I should have taken a sick day because, in the course of that interaction, I inadvertently ended up with a paraphrased version of Shambaugh's words rather than his actual words.
> Being sick and rushing to finish, I failed to verify the quotes in my outline against the original blog source before including them in my draft
The journalistic system has failed us so much that in the news cycle we want things NOW. I think the Ars Technica post went viral on HN as well before the whole controversy, and no one was the wiser until Sam commented about the false quotes.
The system prefers views, and to get views you have to do the work now. There is no room left for someone being sick, and I think this sort of thing extends to every job at times.
And instead of being a productive tool, AI can act as a noise generator. It writes enough noise that looks like signal and, ta-da, no one is the wiser.
People think that pairing AI with a person will make their work 10x more productive, but what actually happens is the noise rises 10x, and the work of finding the signal in that noise increases 10x too. (I'm speaking about employment-related projects; in personal projects it may not matter whether there's 10x noise or 100x noise, as long as it can just do the thing you want.)
When AI systems are constrained, they can deny your API request at marginal loss. But when humans are constrained, you really can't deny an employee's request without taking significant losses at times (whole-day leaves), and I have heard that in some countries sick days are a joke. This could very well be cultural, because sick days are better implemented in Europe than in America (from what I hear).
I don't know about Benj, but some reporters are really paid peanuts. Remember the Pakistani newspaper that printed ChatGPT's output verbatim, with content like "If you want, I can also create an even snappier 'front-page style' version with punchy one-line stats and a bold, infographic-ready layout perfect for maximum reader impact. Do you want me to do that next?" WITHIN the newspaper.
I believe humans should be treated with more dignity, so that they feel comfortable taking sick leave when they are sick... or at least so that we fix this culture of people chugging along through sick days.
Until then, AI is bound to be used, and AI will produce noise/spew random stuff. Imagine you are a journalist, you are sick, and you feel like there's a magical tool that can do the job for you. You use it, and in those moments of sickness you are in IDGAF mode and push the article to main.
I personally don't believe this is going to be an isolated incident, with this whole story playing out the way it has, at the very least.
If any journalist is reading this: please take sick leave when you are sick. Readers appreciate your writing, and I hope you don't integrate AI tools into your workflow so heavily that the work starts being done by AI. Even without AI, you're probably not in the best mental space when you're sick, and readers are happy to wait if you add unique perspectives to the story, something I don't think is possible when you are sick. If any employer still tries to pressure you, just share this message with them, haha, to tell your employer what the people want (and what brings them money long term).
I also hate how the culture has become about finding the article that came out fastest after an event, because that promotes AI use more often than not. It feels like jackals coming out of nowhere to take whatever piece they can out of a particular news event, and that doesn't seem like a great look to me. (I know nothing about how such journalism works, so sorry if I'm wrong about anything; I usually am, but these are just my opinions on the whole thing.)
Obviously, we were rocked by the DrPizza scandal years ago...and now this.
Sobering.
This whole story involved asking Claude to mine a text for quotes, which it refused to do because the text included harassment-related content, then asking ChatGPT to explain that refusal, and so on.
That entire ordeal probably generated more text from the chatbots than just reading the few paragraphs of the blogpost. That's why I think the "I'm sick" angle doesn't matter much. This is the same brainrot as people who go "grok what does this mean" under every twitter post. It's like a schoolchild who cheats and expends more energy cheating than just learning what they're supposed to.
But, does that mean he got slandered twice by an LLM agent or once by an agent and once by a human? Or was he technically slandered 3 times? Twice by agents and a third time by the journalist? New questions for the new agentic society.
> It also comes at a moment in which many media bosses are pushing staff to find uses for AI — as are executives across most industries — even while clear guidelines around use of the technology that uphold editorial ethics remain elusive.
Anyone who works with computers now knows this is true. We're being pushed relentlessly to use AI; in some cases (I've heard secondhand) people are mandated to use AI, sometimes forbidden from crafting code manually, and disciplined if they don't. Yet the guidelines are very unclear, as they must be, since if we're honest we're all treading new ground.
Being mandated to use AI at all costs yet given very brittle, unclear indications on how to use it, with guidelines that evolve weekly, while we're all fearful of losing our jobs, makes for a recipe for disaster.
So yeah, this journalist should have called in sick and used better judgment when toying with AI tools, but there's still a wider problem, and the responsibility for this craziness also lies with the leadership of most companies and the investors pushing for this.
(None of this is an excuse for generating AI slop. I hate it and I don't need to be told any guidelines about not doing it. If you cannot be bothered to write the text, I cannot be bothered to read it.)
A true "senior" AI reporter should be more skeptical of LLM output than anyone else.
I despise Conde Nast
Yeah, no.
I wonder if these are the same people who, 3-4 years ago, were insisting that putting 20 characters onto a blockchain (i.e. an NFT, which was just a URL) was the next multi-billion-dollar business.
Sure, there is such a thing as a naysayer, but there are also people who think all forms of valid criticism are just naysaying.
The editors are the ones ultimately responsible for what they publish. Yet they’re not taking responsibility.
Oh right, being ill is what caused the error. I'd bet that if you start verifying this author's past content, you will see similar AI slop. Either that, or he has always been ill with very little sleep.