We are likely going to get better at judging this new communication and media, but we need much more experience with it before we can do that properly.
It will be annoying for quite a while, as it was with social media, until we find the places that are still worth our time and attention. But I am hopeful that we will get there.
Until then I am going to work on my AI side project every evening until I deem it ready and bug-free. It already works well enough for my own purposes (which I made it for), and my requirements were heavily influenced by my work process. Without AI, I would never have been able to finish such a project, even working on it full time for a year.
Six months ago no-one would post a "Show HN" for a static personal website they had built for themselves - it wouldn't be novel or interesting enough to earn any attention.
That bar just went through the ROOF. I expect it will take a few more months for the new norms to settle in, at which point hopefully we'll start seeing absurdly cool side projects that would not have been feasible beforehand.
What I mean by that is: after reading a brief description of a project, or a conceptual overview, they are no better than noise at predicting whether it will be worthwhile to try out, rewarding to learn about or discuss, or worth using day-to-day.
Things on the front page used to be high-quality software, research papers, etc. Now it is entirely driven by marketing operations and social circles. There is no differential amplification of quality.
I don't know what the solution is, but I imagine it involves some kind of weighted voting. That would be a step towards a complicated engagement algorithm, instead of the elegant and predictable exponential decay that HN is known for.
I have zero interest in seeing something that Claude emitted that the author could never in a million years have written themselves.
It's baffling to me that these people think anyone cares about their Claude prompt output.
I love computing, and programming. If anything I'm better able to appreciate that now that I no longer care if my work has any impact.
Output is growing decoupled from what we used to consider tightly linked personal characteristics.
There is no guarantee that this will reform under rules that make sense in the old order.
It is embarrassing to see grown engineers unable to cope with the obvious.
I think we're going to implement this unless we hear strong reasons not to. The idea has already come up a lot, so there's demand for it, and it seems clear why.
https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
https://news.ycombinator.com/item?id=47077840
https://news.ycombinator.com/item?id=47050590
https://news.ycombinator.com/item?id=47077555
https://news.ycombinator.com/item?id=47061571
https://news.ycombinator.com/item?id=47058187
https://news.ycombinator.com/item?id=47052452
--- original comment ---
Recent and related:
Is Show HN dead? No, but it's drowning - https://news.ycombinator.com/item?id=47045804 - Feb 2026 (423 comments; subthread https://news.ycombinator.com/item?id=47050421 is about what to do about it)
AI makes you boring - https://news.ycombinator.com/item?id=47076966 - Feb 2026 (367 comments)
But other distribution strategies exist. You just have to be smarter about finding and getting in front of your core audience.
Solid solutions are being overshadowed by AI-slop alternatives assembled in a few months with no long-term vision; the results look great superficially, but under the bonnet they're inefficient, expensive, closed, and inflexible, and the experience degrades over time. All the essential stuff that people don't think about initially is what's missing.
It feels like the logical conclusion of the peak attention economy: media fully saturated with junk, where junk beats quality through sheer volume.
Productizing anything is hard, and writing code with AI is basically impossible to do reliably, securely, and at scale unless you're already an expert in what you're trying to do. For example, I'm working on a project now, and it's kind of endearing watching my AI buddy run into every single pothole I hit when I first started working with Tauri and Rust.
Unless you know what you're doing (and why you're doing it), AI suggestions are in the best case benign, and in the worst case architectural disasters. Very rarely, I'm like "hm that might be a good idea."
I think AI-aided development will raise the bar for products and make expert engineers something like 10x more valuable. Personally, I'm elated that I no longer have to write my 4000th React boilerplate or Rust logging helper.
And the real, actual hard work (as in: coming up with new algorithms for novel problems, breaking problems down so others can understand them, splitting up code/modules in a way that makes sense for another person, etc.) will likely never be doable by AI.
Interviews by celebrities predicting AI will revolutionize the economy: 2837191001747
Software and online things I've used that seem to be better than they were before ChatGPT was introduced: 0
People may lack ideas for interesting projects to work on in the first place, so we need to think about how to help people come up with useful and interesting projects to work on.
Related to that idea, people may need to develop skills for building more "complex" ideas, and we may need to think about how to make that possible in the era of AI usage. Even if an AI agent can take care of the technical side of things, it still takes a kind of "complexity of thought" to dream up more complicated, useful, or interesting projects. I get the impression that some training of the mind is necessary for asking for an "automobile" rather than a "faster horse", by analogy, and that conception was often found through manually tinkering with intermediate devices like the bicycle. Hence an AI could "one shot" something interesting, but what that thing is is limited by the imagination of the user, which may in turn be limited by technical inability. In other words, the user may need to develop more technical ability in order to dream up more interesting things to create, even if that "technical ability" is just a more skilled usage of AI tools.
There needs to be some way to filter out the "noise". That's not a new issue, and a lot of these questions or complaints feel very "meta": you could just ask an AI how to make side projects more interesting or useful, or how to create good filters in the age of AI. In sports there are hierarchical levels of competition; likewise, here you might have forums that are more closed off to newcomers if you want to "gatekeep" things. Projects would compete in "local pools" of attention, and if they "win" there, their "qualified authority / leader" submits the project to a higher level, and so on. AI suggests using qualified curators to create newsfeeds or to act as filters on the "slop".
What, were they vibe-coded in COBOL?
That is: I don't understand why the use of Claude Code itself renders them unworthy of discussion.
I'm not sure this article deserves all that much attention if the standard is a subjective interpretation of what is truly special: human-made, human-directed, or not.
Is there some sort of spectrum of not special, kind of special, pretty special, and truly special?
Does it have to be special for everyone or just some people?
Is it trying to say that people by default build and share things for external validation?
The argument about people using AI to solve a problem is akin to how people might once have felt about someone using a spreadsheet to solve a problem.
Sometimes projects are for learning. Sometimes they're for solving a problem that's small to others but worth solving to you.
Insecurity about other people learning to build things for the first time, and then continuing to learn to build them better, might be what this is about, period.
There has always been a great number of problems that could never quite get the attention of software developers.
I've genuinely met non-software folks who are interested in first solving a problem and then solving it better and better. And I think that type of self-directed learning in software is invaluable.
AI makes slop, but humans sure seem to like recreating the same frameworks over and over in every language and thinking it's progress in some way. Every so often, though, you get a positive shift forward, maybe a Ruby on Rails or something else.
Come on. This site keeps promoting negative content.
It wasn't like I couldn't build before, it just makes it easier and a hell of a lot more fun now. I just did an AI side project and it was a blast. https://oj-hn.com
AI isn't going to take your job. People who know AI are.
It’s why I focus only on actual hardware “hacking” projects: they’re more fun to read and follow, and I know they weren’t vibe-coded either.
Just because you can't separate the signal from the noise with that easy check doesn't mean these people can't get the joy of side projects. It's especially lazy when the project is open source and you can literally ask Claude Code: hey, dig into this code, did they build anything interesting or different? Peter's side projects like Vibetunnel and Openclaw have so many hidden gems in the code (a REST API for controlling local terminals, and Heartbeat, respectively) that you can integrate into your own project. Dismissing these projects as "AI slop" stops you from learning what amazing things people are building, especially when those people have different expertise. Lest we forget, the transformer architecture came out of machine translation research and later powered AlphaFold; sometimes the best discoveries come from completely unrelated fields.