I'd heard of beads as a lightweight issue tracker for agents, so this gave me a real shock. What could all that code POSSIBLY be doing? Going to the repo and poking around, I truly cannot tell. There's an enormous `docs/` folder with no hierarchy, containing files like `MULTI_REPO_HYDRATION.md`, which "describes the implementation of Task 3 from the multi-repo support feature (bd-307): the hydration layer that loads issues from multiple JSONL files into a unified SQLite database," and `ANTIVIRUS.md`, a 7KB text file about how `bd.exe` sometimes gets flagged as untrustworthy by antivirus software.
I opened a random Go file, `detect_pollution.go`. This is a CLI command for detecting and cleaning up test tickets from a production database by (1) scanning ticket titles for testing-related prefixes like "debug," "test," or "benchmark," (2) scanning for short descriptions, (3) scanning for suspicious phrases like "sample ticket," and (4) scanning for batches of tickets that were created all at once. It uses these signals to compute a confidence score for each ticket that determines whether it should be deleted. This command was deprecated and replaced by `doctor_pollution.go`, which reimplements large parts of `detect_pollution.go` and is not, at a glance, substantially different. Two seconds of thought will tell you that this feature is unnecessary, since you could just create test tickets with a "#test" tag and then delete them by tag.
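For concreteness, the heuristic described above boils down to something like the following sketch. This is my reconstruction, not beads' actual code: the `ticket` struct, the signal weights, and the thresholds are all illustrative assumptions.

```go
package main

import (
	"fmt"
	"strings"
)

// ticket is a hypothetical stand-in for whatever row type beads scans;
// the fields mirror the four signals described in the comment above.
type ticket struct {
	title       string
	description string
	batchSize   int // how many tickets were created at the same moment
}

// pollutionScore sums weighted signals into a confidence score.
// The weights (0.4, 0.2, 0.3, 0.2) are made up for illustration.
func pollutionScore(t ticket) float64 {
	score := 0.0
	lowerTitle := strings.ToLower(t.title)

	// Signal 1: testing-related title prefixes.
	for _, prefix := range []string{"debug", "test", "benchmark"} {
		if strings.HasPrefix(lowerTitle, prefix) {
			score += 0.4
			break
		}
	}
	// Signal 2: suspiciously short description.
	if len(t.description) < 20 {
		score += 0.2
	}
	// Signal 3: suspicious phrases.
	if strings.Contains(strings.ToLower(t.description), "sample ticket") {
		score += 0.3
	}
	// Signal 4: created as part of a large batch.
	if t.batchSize > 10 {
		score += 0.2
	}
	return score
}

func main() {
	flagged := pollutionScore(ticket{title: "test: foo", description: "sample ticket", batchSize: 50})
	clean := pollutionScore(ticket{title: "Fix login crash", description: "Users report a crash when logging in with SSO enabled.", batchSize: 1})
	fmt.Printf("flagged=%.1f clean=%.1f\n", flagged, clean)
}
```

Which is exactly why it's overkill: a dozen lines of fuzzy scoring to approximate what an explicit "#test" tag would give you exactly.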
I don't want to come across as mean, but Steve should be embarrassed by this. It's grotesquely baroque and completely unmaintainable—proof positive that whatever he's doing isn't working.
I have been volunteering as an advisor on various master's and PhD theses, giving feedback on thesis texts and papers. I see people using AI to write their texts more and more, and I feel like my hours are now wasted on improving AI-generated text instead of helping people hone their writing and thinking skills. Since I cannot constantly analyze and second-guess who actually wrote the texts, I am thinking about stopping my volunteering.
In the end, the biggest difference between the enthusiasts and the skeptics might be “do you enjoy talking to robots.” The rest is downstream of whether you find endless prompting fun or annoying.
All I know is that when I watch someone at 3am, running their tenth parallel agent session, telling me they’ve never been more productive
... okay, I'll bite. What is actually being made here? These people are so productive, running 10 checkouts of a repo with Claude or whoever... Code must be flying out. I'm sure GitHub is seeing lines pushed faster than ever.
I am not seeing an explosion of products worth any cents out of this, though, at least nowhere near what is being evangelised by the "trust me bro, we're productivity gods now" crowd.
Where is the output of all these tokens going, when you wake up the next morning?
I've used AI quite a lot. Enough to know that an inference state machine is an inference state machine.
I want to see it, I want to believe! Show me the goods! Stop telling everyone how productive you are and show the finished work.
At least the post rightly concludes that people are going to go _insane_.
Vibecoding slop every night, waking up the next morning, starting again, and again, without any meaning or end; I suspect these people will quit and move on to something else. I've been programming, probably averagely, for over 25 years -- because I like computers -- not because I like being a productivity junkie mainlining dopamine.
Make it count.
In The Resilient Farm and Homestead by Ben Falk, you'll find a sidebar on page 28, "Oil to Soil--Use It or Lose It: Leveraging the Cheap-Oil Window for Maximum Effect." He says, "we have made the conscious decision to take advantage of the small window of time still remaining with which to develop intergenerational land and infrastructure systems, which greatly enables long-term production of the site without any oil input for hundreds if not thousands of years."
I think of subsidized LLM tokens like this. Use them to build developer tools. Ideally, these developer tools will work with and without further LLM use. Then it won't matter if token prices fall forever, or if the subsidies end and nobody can afford AI-assisted development.
https://x.com/gdb/status/2013164524606775544?s=61 But who is gonna tell him.
I don’t understand what these CEOs are up to.
Now everyone’s a DJ https://www.youtube.com/live/wc5j-HK4NS8
Hearing we’re geniuses all the time can create slop loops.
If we fail to be circumspect about a problem, to think through the implications of our decisions, we’ll produce thoughtless slop.
We have to avoid the dopamine high of “velocity” and take our time and ensure we remember all the real constraints for our problem.
> I'm not sure how we will go ahead here, but it’s pretty clear that in projects that don’t submit themselves to the slop loop, it’s going to be a nightmare to deal with all the AI-generated noise.
> Some projects no longer accept human contributions until they have vetted the people completely.
Also reminds me of the following recent piece about increasing (or exploding?) verification debt:
https://cacm.acm.org/blogcacm/verification-debt-when-generat...