[1] Those "crappy websites" with a maze of iframes are actually considered surprisingly refreshing today.
I know this was a throwaway parenthetical, but I agree 100%. I don't know when the meaning of "social media" went from "internet-based medium for socializing with people you know IRL" to a catchall for any online forum like reddit, but one result of this semantic shift is that it takes attention away from the fact that the former type is all but obliterated now.
Also, on the phrase “you’re absolutely right”: it’s definitely a phrase my friends and I use a lot, albeit in a sort of sarcastic manner when one of us says something obvious, but nonetheless we use it. We also tend to use “Well, you’re not wrong”, again in a sarcastic manner, for something which is obvious.
And, no, we’re not from non English speaking countries (some of our parents are), we all grew up in the UK.
Just thought I’d add that in there, as it’s a bit extreme to see an em dash and instantly jump to “must be written by AI”.
YouTube and others pay for clicks/views, so obviously you can maximize this by producing lots of mediocre content.
LinkedIn is a place to sell, either a service/product to companies or yourself to a future employer. Again, the incentive is to produce more content for less effort.
Even HN has the incentive of promoting people's startups.
Is it possible to create a social network (or "discussion community", if you prefer) that doesn't have any incentive except human-to-human interaction? I don't mean a place where AI is banned, I mean a place where AI is useless, so people don't bother.
The closest thing would probably be private friend groups, but that's probably already well-served by text messaging and in-person gatherings. Are there any other possibilities?
- OpenAI uses the C2PA standard [0] to add provenance metadata to images, which you can check [1]
- Gemini uses SynthId [2] and adds a watermark to the image. The watermark can be removed, but SynthId cannot, as it is part of the image. SynthId is used to watermark text as well, and the code is open-source [3]
[0] https://help.openai.com/en/articles/8912793-c2pa-in-chatgpt-...
[1] https://verify.contentauthenticity.org/
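As a quick illustration of the first point: a C2PA manifest is embedded in the file itself (in JPEGs, inside JUMBF boxes), so a byte scan can give a rough first signal. This is only a sketch under my own assumptions: the `probably_has_c2pa` helper is hypothetical naming, it can be fooled trivially, and actual provenance checking means validating the signed manifest with the tool in [1] or a proper C2PA SDK.

```python
def probably_has_c2pa(data: bytes) -> bool:
    """Heuristic: look for the JUMBF box type and the C2PA label
    that a manifest store typically carries. A hit means
    "metadata appears present", not "provenance verified"."""
    return b"jumb" in data and b"c2pa" in data

# Hypothetical usage:
# with open("image.jpg", "rb") as f:
#     print(probably_has_c2pa(f.read()))
```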
Not only is it impossible to adjudicate or police, I feel like this will absolutely have a chilling effect on people wanting to share their projects. After all, who wants to deal with an internet mob demanding that you disprove a negative? That's not what anyone who works hard on a project imagines when they select Public on GitHub.
People are no more required to disclose their use of LLMs than they are to release their code... and if you like living in a world where people share their code, you should probably stop demanding that they submit to your arbitrary purity tests.
Show HN: Minikv – Distributed key-value and object store in Rust (Raft, S3 API) | https://news.ycombinator.com/item?id=46661308
On the other hand, the fact that we can't tell says less good about AIs than it says bad about most of our (at least online) interactions. How much of my (Thinking, Fast and Slow) System 2 am I putting into these words? How much is just repeating and combining patterns in a given direction, pretty much like an LLM does? In the end, that is what most internet interactions are made of, whether produced directly by humans, by algorithms, or by other means.
There are bits and pieces of exceptions to that rule, and maybe closer to the beginning, before widespread use, the share was bigger, but today, in the big numbers, the usage is not so different from what LLMs do.
Most people probably don't know, but I think on HN at least half of the users know how to do it.
It sucks to do this on Windows, but at least on Mac it's super easy and the shortcut makes perfect sense.
In X amount of time a significant majority of road traffic will be bots in the driver's seat (figuratively), and a majority of said traffic won't even have a human on board. It will be deliveries of goods and food.
I look forward to the various security mechanisms required of this new paradigm (in the way that someone looks forward to the tightening spiral into dystopia).
What should we conclude from those two extraneous dashes....
There's a new one: "wired", as in "I have wired this into X" or "this wires into Y". Cortex does this and I have noticed it more and more recently.
It super sticks out, because who the hell ever said that X part of the program wires into Y?
To that end, I think people will work on increasingly elaborate methods of blocking AI scrapers and perhaps even search engine crawlers. To find these sites, people will have to resort to human curation and word-of-mouth rather than search.
1) to satisfy investors, companies require continual growth in engagement and users
2) the population isn't rocketing upwards on a year-over-year basis
3) the % of the population that is online has saturated
4) there are only so many hours in the day
Inevitably, in order to maintain growth in engagement (comments, posts, likes, etc.), it will have to become automated. Are we there already? Maybe. Regardless, any system which requires continual growth has to automate, and the investor expectations for the internet economy require it, and therefore it has or soon will automate.
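The arithmetic behind those four points is worth making concrete. All numbers below are invented assumptions for illustration only: if investors demand 20% engagement growth per year while human activity stays flat, the synthetic share compounds quickly.

```python
# Toy model of the four premises above (all numbers are invented):
# premise 1 demands growth; premises 2-4 cap human activity.
human = 100.0          # units of human engagement, held flat
target_growth = 1.20   # 20% total engagement growth demanded per year

total = human
for year in range(1, 6):
    total *= target_growth
    synthetic = total - human  # the gap has to come from automation
    print(f"year {year}: {synthetic / total:.0%} of engagement is synthetic")
# year 1: 17% ... year 5: 60%
```

Under these toy assumptions a majority of engagement is synthetic within five years; the exact figures don't matter, only that any fixed growth target over a flat human base forces automation eventually.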
Not saying it's not bad, just that it's not surprising.
I feel things are just as likely to get to the point where real people are commonly declared AI, as they are to actually encounter the dead internet.
I think old school meetups, user groups, etc, will come back again, and then, more private communication channels between these groups (due to geographic distance).
Innovation outside of rich corps will end. No one will visit forums, innovation will die in a vacuum, only the richest will have access to what the internet was, raw innovation will be mined through EULAs, and people striving to make things will just have ideas stolen as a matter of course.
I'm thinking stuff like web rings.
Or if you have a blog, maybe also have a curated set of pages you think are good, sort of your bookmarks, that other people can have a look at.
People are still on the internet and making cool stuff, it's just harder to find them nowadays.
If no human ever used that phrase, I wonder where the AIs learned it from? Have they invented new mannerisms? That seems to imply they're far more capable than I thought they were.
About 10 years ago we had a scenario where bots probably were only 2-5% of the conversation and they absolutely dominated all discussion. Having a tiny coordinated minority in a vast sea of uncoordinated people is 100x more manipulative than having a dead internet. If you ever pointed out that we were being botted, everyone would ignore you or pretend you were crazy. It didn’t even matter that the Head of the FBI came out and said we were being manipulated by bots. Everyone laughed at him the same way.
The API protest in 2023 took away tools from moderators. I noticed increased bot activity after that.
The IPO in 2024 means that they need to increase revenue to justify the stock price. So they allow even more bots to increase traffic which drives up ad revenue. I think they purposely make the search engine bad to encourage people to make more posts which increases page views and ad revenue. If it was easy to find an answer then they would get less money.
At this point I think reddit themselves are creating the bots. The posts and questions are so repetitive. I've unsubscribed from a bunch of subs because of this.
But really I'm not a professional in this field. I'm sure there are pitfalls in my imagined solution. I just want some traceability from the images used in news articles.
I call this the "carpet effect", after the tradition whereby all carpets in Morocco have a deliberate imperfection, lest the work impersonate god.
I'm sure it's happening, but I don't know how much.
Surely some people are running bots on HN to establish sockpuppets for use later, and to manipulate sentiment now, just like on any other influential social media.
And some people are probably running bots on HN just for amusement, with no application in mind.
And some others, who were advised to have an HN presence, or who want to appear smarter, but are not great at words, are probably copy&pasting LLM output to HN comments, just like they'd cheat on their homework.
I've gotten a few replies that made me wonder whether it was an LLM.
Anyway, coincidentally, I currently have 31,205 HN karma, so I guess 31,337 Hacker News Points would be the perfect number at which to stop talking, before there's too many bots. I'll have to think of how to end on a high note.
(P.S., The more you upvote me, the sooner you get to stop hearing from me.)
Maybe it is a UK thing?
https://en.wikipedia.org/wiki/The_Unbelievable_Truth_(radio_...
I love that BBC radio (today: BBC audio) series. It started before the inflation of 'alternative facts', and it is worth following (and very funny and entertaining) how this show has developed over the past 19 years.
But it was a long death struggle, bleeding out drop by drop. Who remembers that people had to learn netiquette before getting into conversations? That is called civilisation.
The author of this post experienced the last remains of that culture in the 00s.
I don't blame the horde of uneducated home users who came after the Eternal September. They were not stupid. We could have built a new culture together with them.
I blame the power of the profit. Big companies rolled in like bulldozers. Mindless machines, fueled by billions of dollars, rolling in the direction of the next ad revenue.
Relationships, civilization and culture are fragile. We must take good care of them. We should. But the bulldozers destroyed every structure we lived in on the Internet.
I don't want to whine. There is a lesson here: money, and especially advertising, is poison for social and cultural spaces. When we build the next space where culture can grow, let's make sure to keep the poison out by design.
The 'Dead Internet' (specifically AI-generated SEO slop) has effectively broken traditional keyword search (BM25/TF-IDF). Bad actors can now generate thousands of product descriptions that mathematically match a user's query perfectly but are semantically garbage/fake.
We had to pivot our entire discovery stack to Semantic Search (Vector Embeddings) sooner than planned. Not just for better recommendations, but as an adversarial filter.
When you match based on intent vectors rather than token overlap, the 'synthetic noise' gets filtered out naturally because the machine understands the context, not just the string match. Semantic search is becoming the only firewall against the dead internet.
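To sketch the mechanism (a toy model with hand-picked vectors, not a real embedding model or any particular production stack): keyword stuffing can maximize token overlap, but it cannot steer where a document's embedding lands, so a cosine-similarity cutoff against the query's intent vector drops it. The function names and threshold here are illustrative assumptions.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def filter_by_intent(query_vec, docs, threshold=0.7):
    """Keep only docs whose embedding sits close to the query's intent."""
    return [doc_id for doc_id, vec in docs if cosine(query_vec, vec) >= threshold]

# Toy example: doc "b" could match every keyword in its raw text,
# but its (hypothetical) embedding points somewhere else entirely.
query = [1.0, 0.2, 0.0]
docs = [("a", [0.9, 0.3, 0.1]), ("b", [0.0, 0.1, 1.0])]
print(filter_by_intent(query, docs))  # -> ['a']
```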
1. People who live in poorer countries who simply know how to rage bait and are trying to earn an income. In many such countries $200 in ad revenue from Twitter, for example, is significant; and
2. Organized bot farms who are pushing a given message or scam. These too tend to be operated out of poorer countries because it's cheaper.
Last month, Twitter kind of exposed this accidentally with an interesting feature that showed account location with no warning whatsoever. Interestingly, showing the country in the profile was disabled for government accounts after it raised some serious questions [1].
So I started thinking about the technical feasibility of showing location (country, or state for large countries) on all public social media accounts. The obvious defense is to use a VPN in the country you want to appear to be from, but I think that's a solvable problem.
Another thing I read about was Nvidia's efforts to combat "smuggling" of GPUs to China with location verification [2]. The idea is fairly simple: you send a challenge and measure the latency. VPNs can't hide latency.
So every now and again the Twitter or IG or TikTok server would answer an API request with a challenge, which couldn't be anticipated and would also be secure, being part of the HTTPS traffic. The client would respond to the challenge, and if the latency was consistently 100-150ms despite the account showing a location of Virginia, you could deem them inauthentic and basically just downrank all their content.
There's more to it of course. A lot is in the details. Like you'd have to handle verified accounts and people traveling and high-latency networks (eg Starlink).
You might say "well the phone farms will move to the US". That might be true but it makes it more expensive and easier to police.
It feels like a solvable problem.
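A back-of-the-envelope version of the check (all constants are my assumptions, not a real deployment): light in fiber covers roughly 200 km per millisecond one way, so a claimed distance implies a physical floor on round-trip time. A VPN can relocate the IP but not shorten the physical path, so a consistently large gap between claimed distance and measured RTT is the signal.

```python
FIBER_KM_PER_MS = 200.0  # rough one-way speed of light in fiber (assumption)

def min_rtt_ms(distance_km: float) -> float:
    """Physical lower bound on round-trip time over a given distance."""
    return 2 * distance_km / FIBER_KM_PER_MS

def suspicious(claimed_distance_km: float, measured_rtt_ms: float,
               slack_ms: float = 40.0) -> bool:
    """True when the measured RTT exceeds what the claimed distance plus
    generous network slack would allow. One sample proves nothing
    (slow links exist), so this should feed a score, not a ban."""
    return measured_rtt_ms > min_rtt_ms(claimed_distance_km) + slack_ms

# Claimed: ~100 km from the server (say, "Virginia"); measured: 130 ms.
print(suspicious(100, 130))  # -> True
```

The handling of travelers, Starlink, and other high-latency-but-honest clients mentioned above would live in the slack term and in how many consistent samples you require.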
[1]: https://www.nbcnews.com/news/us-news/x-new-location-transpar...
[2]: https://aihola.com/article/nvidia-gpu-location-verification-...
There may be some irony to be found in this human centipede.
I do and so do a number of others, and I like Oxford commas too.
If AI would cost you what it actually costs, then you would use it more carefully and for better purposes.
> What if people DO USE em-dashes in real life?
They do and have, for a long time. I know someone who for many years (much longer than LLMs have been available) has complained about their overuse.
> hence, you often see -- in HackerNews comments, where the author is probably used to Markdown renderer
Using two dashes for an em-dash goes back to typewriter keyboards, which had only what we now call printable ASCII and where it was much harder to add non-ASCII characters than it is on your computer - no special key combos. (Which also means that em-dashes existed in the typewriter era.)
Those sound funny; why would they make you sad?
There are many compounding factors but I experienced the live internet and what we have today is dead.
So I go on my province's subreddit. Politics-wise, if there were an election today, the incumbent politician would increase their majority and may even be looking at a true majority. Hugely popular.
If you find a political thread, there will be 500 comments all agreeing with each other that the incumbent is evil, 50 comments downvoted and censored because they dare have an opinion that agrees with the incumbent, 100 comments deleted by anonymous mods banning people for what reason? Enjoy your echo chamber.
Anyone who experiences being censored a few times will just stop posting. Then when the election happens they have no idea at all why people would ever vote that way because they have never seen anyone do anything but agree with their opinion.
What an utterly dead subreddit.
This is the modern epistemic crisis. And wait till Elon implants a brain computer interface in you. You won't even fully trust your eye looking through a telescope.
The Internet has never been dead. Or alive. Ever since it escaped its comfortable cage in the university / military / small-clique-of-corporations ecosystem and became a thing "anyone" can see and publish on, there has forever been a push-pull between "People wanting to use this to solve their problems" and "People wanting eyeballs on their content, no matter the reason." We're just in an interesting local minimum where the ability to auto-generate human-shaped content has momentarily overtaken the tools search engines (and people with their own brains) use to filter useful from useless, and nobody has yet come up with the PageRank-equivalent nuclear weapon to swing the equation back again.
I'm giving it time, and until it happens I'm using a smaller list of curated sites I mostly trust to get me answers or engage with people I know IRL (as well as Mastodon, which mostly escapes these effects by being bad at transiting novelty from server to server), because thanks to the domain name ownership model pedigree of site ownership still mostly matters.
How sick and tired I am of this take. Okay, people are just bags of bones plus slightly electrified boxes with fat and liquid.
Paying creators is the dumbest and most consequential aspect of modern media. There is no reason to reward creators, zero. They should actually be paying Youtube for access to their audience. They actually would pay to be seen, paying them is both stupid and unnecessary. Kill the incentives and you kill the cancer.
Yeah, I especially hate how paranoid everyone is (but rightly so). I am constantly suspicious of others' perfectly original work being AI, and others are constantly suspicious of my work being AI.
If you define social networks as a graph of connections, fair enough - there's no graph. It is social media though.
HN is social in the sense that it relies on (mostly) humans considering what other humans would find interesting and posting/commenting for the social value (and karma) that generates. Text and links are obviously media.
There seems to be an insinuation that HN isn't in the same category as other aggregators and algorithmic feeds. It's not always easy to detect, but the bots are definitely among us. HN isn't immune to slop; it's just fairly good at filtering the obvious stuff.
I mean sure, the next step will probably be "your ads have been seen by x real users and here are their names, emails, and mobile numbers" :(
As well as verification, there must be teams at Reddit/LinkedIn/wherever working on ways to identify AI content so it can be de-ranked.
> The Oxford Word of the Year 2025 is rage bait
> Rage bait is defined as “online content deliberately designed to elicit anger or outrage by being frustrating, provocative, or offensive, typically posted in order to increase traffic to or engagement with a particular web page or social media content”.
https://corp.oup.com/news/the-oxford-word-of-the-year-2025-i...
I am sick of the em-dash slander as a prolific en- and em-dash user :(
Sure for the general population most people probably don't know, but this article is specifically about Hacker News and I would trust most of you all to be able to remember one of:
- Compose, hyphen, hyphen, hyphen
- Option + Shift + hyphen
(Windows Alt code not mentioned because WinCompose <https://github.com/ell1010/wincompose>)
Maybe the future will be dystopian and talking to a bot to achieve a given task will be a skill? When we reach the point that people actually hate bots, maybe that will be a turning point?
I cannot find a place to talk to like-minded people anymore; it's all gamified to sell you something. All folks do on Reddit is talk at you. I'm starting to doubt half the people posting are actual people now... with so many comments that are perfectly formatted and phrased like ChatGPT.
I don't mind people using AI to create open source projects; I use it extensively, but I have a rule that I am responsible and accountable for the code.
Social media have become hellscapes of AI slop, full of "influencers" trying to make quick money by overhyping slop to sell courses.
Maybe where you are from the em dash is not used, but in Queen's English speaking countries the em dash is quite common to represent a break of thought from the main idea of a sentence.
I don’t think LLMs and video/image models are a negative at all. And it’s shocking to me that more people don’t share this viewpoint.
1. There are channels specialized in topics like police bodycam and dashcam videos, or courtroom videos. AI there is used to generate the voice (and sometimes a very obviously fake talking head) and maybe the script itself. It seems a way to automate tasks.
2. Some channels are generating infuriating videos about fake motorbike releases. Many.
Think of the children!!!
dude, hate to break it to you, but the fact that it's your "one and only" makes it more convincing that it's your social network. if you used facebook, instagram, and tiktok for socializing, but HN for information, you would have another leg to stand on.
yes, HN is "the land of misfit toys", but if you come here regularly and participate in discussions with other people on a variety of topics and you care about the interactions, that's socializing. The only reason you think it's not is that you find actual social interaction awkward, so you assume that if you like this it must not be social.
Just absolutely loved it. Everyone was wondering how deepfakes are going to fool people but on HN you just have to lie somewhere on the Internet and the great minds of this site will believe it.
It used to be the Internet, back when the name was still written with a capital first letter. The barrier to utilizing the Internet was high enough that mostly only the genuinely curious and thoughtful people a) got past it and b) had the persistence to find interesting stuff to read and write about on it.
I remember when TV and magazines were full of slop of the day at the time. Human-generated, empty, meaningless, "entertainment" slop. The internet was a thousand times more interesting. I thought why would anyone watch a crappy movie or show on TV or cable, created by mediocre people for mere commercial purposes, when you could connect to a lone soul on the other side of the globe and have intelligent conversations with this person, or people, or read pages/articles/news they had published and participate in this digital society. It was ethereal and wonderful, something unlike anything else before.
Then the masses got online. Gradually, the interesting stuff got washed in the cracks of commercial internet, still existing but mostly just being overshadowed by everything else. Commercial agenda, advertisements, entertainment, company PR campaigns disguised as articles: all the slop you could get without even touching AI. With subcultures moving from Usenet to web forums, or from writing web articles to posting on Facebook, the barrier got lowered until there was no barrier and all the good stuff got mixed with the demands and supplies of everything average. Earlier, there always were a handful of people in the digital avenues of communication who didn't belong but they could be managed; nowadays the digital avenues of communication are open for everyone and consequently you get every kind of people in, without any barriers.
And where there are masses there are huge incentives to profit from them. This is why internet is no longer an infrastructure for the information superhighway but for distributing entertainment and profiting from it. First, transferring data got automated and was dirt cheap, now creating content is being automated and becomes dirt cheap. The new slop oozes out of AI. The common denominator of internet is so low the smart people get lost in all the easily accessed action. Further, smart people themselves are now succumbing in it because to shield yourself from all the crap that is the commercial slop internet you basically have to revert to being a semi-offline hermit, and that goes against all the curiosity and stimuli deeply associated with smart people.
What could be the next differentiator? It used to be knowledge and skill: you had to be a smart person to know enough and learn enough to get access. But now all that gets automated so fast that it proves to be no barrier.
Attention span might be a good metric to filter people into a new service, realm, or society; even though, admittedly, it is shortening for everyone, smart people would still win.
Earlier solutions such as Usenet and IRC haven't died, but they're only used by the old-timers. It's a shame, because a gathering there would miss all the smart people raised in the current social media culture: the world changes, and what worked in the 90's is no longer relevant except for people who were there in the 90's.
Reverting to in-real-life societies could work but doesn't scale world-wide and the world is global now. Maybe some kind of "nerdbook": an open, p2p, non-commercial, not centrally controlled, feedless facebook clone could implement a digital club of smart people.
The best part of setting up a service for smart people is that it does not need to prioritize scaling.
Reminds me of those times in Germany when mainstream media and people with decades in academia used the term "Putin Versteher" (person who gets Putin, a Putin 'understander') ad nauseam ... it was hilarious.
Unrelated to that, sometime last year, I searched "in" ChatGPT for occult stuff in the middle of a sleepless night and it returned a story about "The Discordians", some dudes who ganged up in a bowling hall in the 70's and took over media and politics, starting in the US and growing globally.
Musk's "Daddy N** Heil Hitler" greeting, JD's and A. Heart's public court hearings, the Kushners being heavily involved with the recruitment department of the Epsteins Islands and their "little Euphoria" clubs as well as Epstein's "Gugu Gaga Cupid" list of friends and friends of friends, it's all somewhat connected to "The Discordians", apparently.
It was a fun "hallucination" in between short bits on Voodoo, Lovecraft and stuff one rarely hears about at all.
Recently someone accused me of being a clanker on Hackernews (firstly lmao, but secondly wow) because of my "username" (not sure how it's relevant; when I created this account I felt a moral obligation to learn and ask for help to improve, and improve I did, whether in writing skills or in learning about tech).
Then I posted another comment on another thread here which was talking about something similar. The earlier comment got flagged, along with my response to it, but this one stayed. Then someone else saw that comment and accused me of being AI again.
This pissed me off, because I got called AI twice in 24 hours. That made me want to quit hackernews, because you can see from my comments that I write long comments (partially because they act as my mini blog, and I just like being authentically me; this is me just writing my thoughts with a keyboard :)
To say that what I write is AI feels like such high disrespect to me, because I have spent hours thinking about some of the comments I made here & I don't really care for the upvotes. It's just that this place is mine and these thoughts are mine. You can know me and verify I am real just by reading through the comments.
And then getting called AI.... oof. Anyways, I created a "Tell HN: I got called clanker twice" where I wrote the previous thing, which got flagged, but I am literally not kidding: the first comment came from an AI-generated bot itself (a completely new account), I think 2 minutes or so afterwards, and it literally just said "cool".
Going to their profile, I saw they were promoting some AI shit like fkimage or something (intentionally not saying the real website, because I don't want those bots to get any ragebait attention or conversions on their websites).
So you can see the whole irony of the situation here.
I immediately built myself a bluesky thread creator where I can write a long message and it would automatically loop or something (ironically built via claude because I don't know how browser extensions are made) just so that I can now write things in bluesky too.
Funny thing is, I used to defend Hackernews and glorify it a few days ago, when a much more experienced guy compared HN to 4chan.
I am a teenager. I don't know why I like saying this, but the point is, most teenagers aren't like me (that I know of). It has both its ups and downs (I should be studying chemistry right now), but Hackernews culture was something that inspired me to be the guy who feels confident tinkering/making computers "do" what he wants (mostly for personal use/prototyping, so I do use some parts of AI; you can read one of my other comments on why I believe, even as an AI hater, prototyping and personal use might make sense with AI for the most part; my opinion's nuanced).
I came to hackernews because I wanted to escape dead internet theory in the first place. I saw people doing some crazy things in here, and reading comments this long while commuting from school was a vibe.
I am probably gonna migrate to lemmy/bluesky/the federated land. My issue with them is that the ratio of political messages to tech content is way off (and I love me some geopolitics, but quite frankly I am tired and I just want to relax).
But the lure of Hackernews is way too much, which is why you still see me here :)
I don't really know what the community can do about bots.
Another part is that there is this model on LocalLLaMA which I discovered the other day that works in the opposite direction (it converts LLM-looking text to human-looking text, actually bypasses some bot checks, and also strips the em dashes, I think).
Grok (I hate Grok) produces some insanely real-looking text. It still has the em dashes, but I do feel like if one removes them and modifies the text just a bit (whether using that LocalLLaMA model or others), you've got yourself a genuine propaganda machine.
I was part of a Discord AI server, and I was shocked to hear that people had built their own LLMs/finetunes and were running them; they actually experimented on 2-3 people, and none were able to detect it.
I genuinely don't know how to prevent bots in here, or how to prevent false positives.
I lost my mind 3 days ago when this happened. I had to calm myself, and I am trying to use Hackernews less frequently. I just don't know what to say, but I hope y'all realize how it left a bad taste in my mouth & why I feel a little unengaged now.
Honestly, I am feeling like turning my previous hackernews comments into blog posts on my own website. They might deserve a better place too.
Oops, wrote way too long of a message, so sorry about that man, but I just went with the flow. And thanks for writing this comment, so that I can finally have this one comment to try to explain how I was feeling.