I would like to see more of this. As a heavy user of LLMs, I still write 100% of my own communication. Do not send me something an LLM wrote; if I wanted to read LLM output, I would ask an LLM.
When I was young, I used to think I'd stay open-minded about changing times and never become curmudgeonly, but it only takes one "conversation" where someone responds with ChatGPT, and I am officially a curmudgeon.
A lot of the time, open source PRs are very strategic pieces of code, written so as not to introduce regressions; an LLM does not necessarily know or care about that, and someone vibe coding might not know the project's expectations. I guess, instead of (or alongside) a Code of Conduct, we need a sort of "Expectation of Code" document that spells out the project's expectations.
I can see how frustrating it is to wade through those; they are distracting and take time away from the maintainers actually getting things fixed up.
> I'm of the opinion if people can tell you are using an LLM you are using it wrong.
They continued:
> It's still expected that you fully understand any patch you submit. I think if you use an LLM to help you nobody would complain or really notice, but if you blindly submit an LLM authored patch without understanding how it works people will get frustrated with you very quickly.
<https://lists.wikimedia.org/hyperkitty/list/wikitech-l@lists...>
That said, I don’t think a blanket "never post LLM-written text" rule is the right boundary, because it conflates two very different behaviours:
1. Posting unreviewed LLM output as if it were real investigation or understanding (bad, and I agree this should be discouraged or prohibited), versus
2. A human doing the work, validating the result, and using an LLM as a tool to produce a clear, structured summary (good, and often beneficial).
Both humans and LLMs require context to understand and move things forward. For bug investigation specifically, it is increasingly effective to use an LLM as part of the workflow: reasoning through logs, reproduction steps, and likely root cause, and then producing a concise update that captures the outcome of the investigation.
I worked on an open source "AI friendly" project this morning and did exactly this.
I suspect the reporter filed the issue using an LLM, but I read it as a human and then worked with an LLM to investigate. The comment I posted is brief, technical, and adds useful context for the next person to continue the work. Most importantly, I stand behind it as accurate.
Is it really worth anyone’s time for me to rewrite that comment purely to make it sound more human?
So I do agree with Jellyfin's goal (no AI spam, no unverifiable content, no extra burden on maintainers). I just don’t think "LLM involvement" is the right line to draw. The right line is accountability and verification.
That said, I understand calling it out specifically. I like how they wrote this.
Related:
> https://news.ycombinator.com/item?id=46313297
> https://simonwillison.net/2025/Dec/18/code-proven-to-work/
> Your job is to deliver code you have proven to work
love the "AI" in quotes
I know there will probably be a whole host of people from non-English-speaking countries who will complain that they are only using AI to translate because English is not their first (or maybe even second) language. To those I will just say: I would much rather read your non-native English, knowing you put thought and care into what you wrote, than read an AI's (poor) interpretation of what you hoped to convey.
GenAI can be incredibly helpful for speeding up the learning process, but the moment you start offloading comprehension, it starts eroding trust structures.
We do that internally, and I can't overstate how much better the output is, even with small prompts.
IMO, a policy like "don't put abusive comments" is better off in that file: you'll never see that kind of comment again, instead of fighting with a dozen bad contributions.
One more reason to support the project!!
Sort of related: Plex doesn't have a desktop music app, and the PlexAmp iOS app is good but meh. So I spent the weekend vibe coding my own Plex music apps (macOS and iOS), and I have been absolutely blown away by what I was able to make. I'm sure the code quality is terrible, and I'm not sure a human would be able to jump in there and do anything, but they are already the apps I use day-to-day for music.
Should just be an instant perma-ban (along with closure, obviously).
"LLM Code Contributions to Official Projects" would read exactly the same if it just said "Code Contributions to Official Projects": Write concise PRs, test your code, explain your changes and handle review feedback. None of this is different whether the code is written manually or with an LLM. Just looks like a long virtue signaling post.