They're also the ideal place to try out new AI tools that your professional work might not let you experiment with.
(The headline of this piece doesn't really do it justice - it misuses "vibe coded" and fails to communicate that the substance of the post is about visual design traits common with AI-generated frontends, which is a much more interesting conversation to be having. UPDATE: the headline changed, it's now much better - "Show HN submissions tripled and now mostly have the same vibe-coded look" - it was previously "Show HN submissions tripled and are now mostly vibe-coded")
(maybe what this post calls "Icon-topped feature card grid." ...that might be the official design pattern term)
https://news.ycombinator.com/showlim (<-- this is what many accounts without much HN history now see, and it's responsible for the downtick to the right on OP's chart)
Ask HN: Please restrict new accounts from posting - https://news.ycombinator.com/item?id=47300329 - March 2026 (515 comments)
Is Show HN dead? No, but it's drowning - https://news.ycombinator.com/item?id=47045804 - Feb 2026 (425 comments)
In 2016, if I saw 10,000 lines of code, that carried a certain proof-of-work with it. They probably couldn't help but give the code some testing as they were working up to that point. We know there has to have been a certain amount of thought in it. They've been living with it for some months, guaranteed.
In 2026, 10,000 lines of code means they spent a minimum amount of money on tokens. 10,000 lines can be generated pretty quickly in a single task, if it's something like "turn this big OpenAPI spec into an API in my language". It's entirely possible 90%+ of the project hasn't actually been tested, except by the unit tests the AI wrote itself, which is a great start, but not much assurance for code that has never actually run in any real-world scenario.
Nothing about any of that is intrinsically wrong. But the standards have to shift. While the bar for a "Show HN" should perhaps not be high, it should probably be higher than "I typed a few things into a text box". And that's not because typing a few things is necessarily "bad" either, but because of the mismatch between the value of human attention and the cheapness of making a claim on it.
It's kind of a bummer in some sense... but then again, honestly, the space of things that can be built with an idea and a few prompts to an AI was frankly fairly well covered even before AI coding tools. Already I had a list of "projects we've already seen a lot of so don't expect the community to shower you with adulation" for any language community I've spent any significant time in. AI has grown the list of "projects I've seen too many times" a bit, but a lot of what I've seen is that we're getting an even larger torrent of the same projects we already had too many of before.
When the surface dwellers have become crazed by disease and war, and their lands contaminated with the detritus of broken promises of innovation and heavy metals, we must build a new Eden.
As much as I adore Gemini as a concept, I yearn to express myself in the visual medium. Dillo might honestly be enough to render something beautiful within its constraints. With Wireguard meshes as the transport, and invitations offered and withdrawn by personal trust, perhaps we can have a place where our ideas could once again flourish without being amplified and distilled into mediocrity by the great monoliths looming like thunderous currents on the horizon.
We can hope the LLMs hallucinate slightly different CSS once in a while now...
Also, would be good to show trends over time rather than just a one-time pie chart showing breakdown into arbitrary categories.
The other issue of HN being inundated with AI bots is related, but a kind of different problem.
http://www.catb.org/jargon/html/S/September-that-never-ended... https://en.wikipedia.org/wiki/Eternal_September
The advantage of having so many ideas being tried and published is we are exploring the space of possibility faster, and so there's more to learn from. The disadvantage is that signal to noise is way down. Also, because the system is self-reflective and dynamic, there's a natural downward spiral as the common spaces get overrun and we cannot coordinate signal. The Tragedy of the Commons.
I guess I spent 10 years worrying about this in my MeatballWiki era in my 20s, and now I'm in my midlife crisis era and prefer to just have fun with the world that I have.
I signed up for a Mobbin account to find inspiration only to find every app and website looks the same. I came to the same conclusion, “this isn’t bad but it’s certainly uninspired”
so, n=1 plus Baader-Meinhof? (https://en.wikipedia.org/wiki/Frequency_illusion)
Great job to everyone who has created something
Models have their own archetypes. Since early this year almost every vibecoded website is Opus, which has its own style. It has different characteristics from a website by GPT. Yet again different from one by Gemini. Each one has its own set of traits. Opus 4.5/4.6 traits are markedly different from earlier versions. Mixing them all into one and then using it to "identify AI coded websites" doesn't work.
But the good thing is, it will now include those accessibility items, too. Personally I have misokinesia and migraines, so I get it.
Here's what it found if you want to see: https://www.perplexity.ai/search/given-these-how-can-we-crea...
I use LLMs in my side projects the way this guy uses them. So many times I spent days or weeks on a side project just to make sure it was perfect, only to have zero interest from anyone else after sharing.
At least in the field I work in (ecommerce/retail), design is often what separates one brand from another when presenting their products. Maybe it won't happen on the web as much in the future, but I suspect it will still be important when it comes to visually communicating to consumers
Why? Let me guess: because these patterns were frequently seen in human-made sites too, but that won't fit the narrative.
Remember, several AI detectors claimed the Declaration of Independence was AI-generated[0]. Keep this in mind when someone (like the author of this article) proudly shows you their home-made AI detector.
[0]: https://dallasexpress.com/state/zerogpt-flags-1836-texas-dec...
at my workplace the phrase in status/report-out meetings "I built" now means "I asked claude to build"
All of a sudden managers, architects (who haven't written code in a decade), and directors are all building tools
so now we're debugging the tools "they built" and why our product isn't working with them.
The UI of Electric Minds Reborn (Amsterdam Web Communities System) was not AI-generated. At most, it was AI translated, as I used Claude to help turn old clunky 2006-era HTML into modern styling with Tailwind CSS. See also https://erbosoft.com/blog/2026/04/07/to-ai-or-not-to-ai/.
This has been killing me recently. Apparently I need slightly higher contrast than some people, and these vibe coded UIs are basically unreadable to my eyes
Nooo please don't ruin great fonts by associating them with low effort vibecoding
They may be somewhat overused but they are popular for a reason
It’s entirely possible a Show HN I posted is included and I’d love to know how it scored.
That said, the AI slop problem is real. Most of it has very little depth. I'd love a sidebar tool that rates submissions on engineering rigor so projects with real technical depth don't get overlooked, and there's a clear differentiator between pure vibe-coding and engineering-backed work.
The more interesting question the post raises, at least for me, is that distribution platforms like Show HN, Product Hunt, etc. were designed for an era when launching something was costly enough to be a signal. When a weekend project can ship a production-looking landing page, upvotes on these platforms start selecting for whatever catches the eye fastest, not whatever actually solves a problem. The signal degrades.
I've been thinking about this a lot because I'm building a directory where you have to rank 5 other projects before you can post your own — trying to see if forced engagement produces better signal than one-click upvotes. Too early to say if it works, but I do think "how do we find the good stuff under the slop" is the real problem and it probably isn't solved by detecting AI design patterns.
Before, you could get away with doing business on a basic one-pager, which was about the same as what everyone else had, but these days it looks lazy/incompetent.
You don’t have any more time to throw it together than you did before so… yeah I guess slop it is. Probably not going to be humans reading it past the front page anyway. If you want to engage humans, use LinkedIn or TikTok or something.
- all designs are going to be AI generated and look the same
- well unless you ask your agent to make it look different
Are we going to call 'AI slop' everything that doesn't reinvent design from zero for a marketing page?
In a sense it shows that the creator didn’t care enough to make their UI/presentation unique which causes some like me to question exactly how much effort they bothered to put in at all.
As part of our code security review we have a “sloppification” score. Higher numbers have been reliably usable by people like me as indicators of what to focus my pentesting efforts on.
Before the usual suspects get snarky: Does that mean AI only generates slop? No. But it is an indicator of effort and oversights.
Let’s take the opposite case, where someone handcrafted a website but the actual project/product was just a vibecoded mess. Is that not infinitely worse? Imo, what matters is what they actually made with the thing.
I get that these LLMs are pumping out ugly websites. But unless the product is a design system or website builder, it’s not my main concern.
I'm much more critical of closed-source, subscription, wrappers over open source software of simple prompts.
Heavy slop (5+ patterns) · 105 sites · 21%
Mild (2–4) · 230 sites · 46%
Clean (0–1) · 165 sites · 33%
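(A hypothetical sketch of the bucketing this breakdown implies: each site gets a count of matched "slop" design patterns and falls into one of three bins. The bin edges are taken from the labels above; the function names and everything else are assumptions.)

```python
from collections import Counter

def bucket(pattern_count: int) -> str:
    """Map a site's slop-pattern count to a category, per the thresholds above."""
    if pattern_count >= 5:
        return "heavy"
    if pattern_count >= 2:
        return "mild"
    return "clean"

def summarize(counts):
    """Return {bucket: (site_count, percent)} for a list of per-site pattern counts."""
    bins = Counter(bucket(c) for c in counts)
    total = len(counts)
    return {b: (n, round(100 * n / total)) for b, n in bins.items()}

# Reproducing the reported numbers (500 sites total):
summary = summarize([6] * 105 + [3] * 230 + [0] * 165)
# → {"heavy": (105, 21), "mild": (230, 46), "clean": (165, 33)}
```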
Can we have a list of the "clean" ones please? Actually, if you give me a list of the IDs for all 3 categories, I'll make URLs for each that people can browse. If the community feels that the division is useful, then we can maybe take you up on your offer to open-source the project, and perhaps find a way to use it on HN itself.
Likewise, the issue is often that many of these projects show no evidence of long term maintenance. That might be the new signal we watch for?
There also used to be a sense in the tech community of "if you build it they will come" and that has been basically completely lost at this point. Between the discussion earlier this week of people's fraudulent GH stars, and this topic, and the wave of submissions I see on e.g. r/rust, it's just hard to imagine how -- as a pure "tech nerd" -- to get eyes or assistance on projects these days.
I have projects I've held off on "Show HN" for years because I felt I wasn't ready for the flood of users or questions and criticisms. Maybe the joke's on me. (Of course, like everyone else these days, I've used AI to work on them, but much of them predates agentic tools.)
There's a long-term phenomenon where quite a lot of pages presented here no longer exist after 12 months or so... This was already the case before the whole AI slop flooded in... but since then the rate has just grown massively.
It's particularly annoying when there's an actually useful service or app, you sign up, and after a couple of months it's all gone...
Then the question becomes, do we need to go back to hand-picking every single css element to avoid being suspected of vibe coding? Why is it ok for someone to generate a css template on the fly using shadcn, but not ok to generate styles using claude code? Will someone using shadcn be judged the same as someone using claude code for styles?
Personally what I think I'm seeing is a breaking down of walls. Now ideas that once would have gone back to the imagination vault finally have a pathway to reality.
Kind of off-topic - but why is there always so much focus amongst AI bros on whether, or how well, LLMs can build UI? My shallow assumption was that it's because that's what LLMs are particularly bad at.
But lately I've kind of gotten the sense that a lot of people seem to mostly be building UI stuff with LLMs. Weird.
In a climate where it seems like VCs are woefully bereft of the same skills, there's an impetus to just slop garbage up for any vague idea, without taking the care or time to polish it into something with that intangibly human sense of greatness and clarity.
I see, you've done something -- but why? If you continue to ask this question, you will arrive at good science ... but many submissions are not aimed at that level of communication or stop far ahead of the point at which the question becomes interesting.
There's that phrase: "better to remain silent and be thought a fool than to speak and remove all doubt", which strikes me as poignant, except it seems like today's audience are also fools ... the inmates are running the asylum.