There are a lot of contexts where I can understand arguing against LLM use (even in cases where I might not entirely agree, I can understand the objection), but this is not one of them.
Don't think this will improve your life? Great, don't do it. But this is also the most classic case of "don't yuck other people's yum". If someone tells you they used an LLM to make some piece of software for themselves and they like it or that it improved some aspect of their life or workflow, what on earth is gained by trying to convince them that no, actually, it's not good and isn't improving their lives and is in fact garbage?
For me, part of creating "perfect" software is that I am very much the one crafting the software. I'm learning while creating, but I find such learning is greatly diminished when I outsource building to AI. It's certainly harder and perhaps my software is worse, but for me the sense of achievement is also much greater.
I find myself scratching real itches that would otherwise have been left un-scratched, because the hurdle to just getting started on something was too damn high.
For example, I needed an image contact sheet. I'm sure there are a lot of contact-sheet generators out there, but I was able to just as quickly get Claude to write me a set of scripts that took a bunch of raw images, resized them down to thumbnails, extracted their metadata, and wrote a PDF (via Typst) with filenames and metadata, in date order.
I got lost perfecting it, hand-picking fonts, etc., but it still only took an hour or so from start to finish.
It's perfect for my needs, and I can customise it any time I want by simply asking Claude to modify it.
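To give a concrete (if simplified) picture, here is a minimal sketch of what such a script could look like. This is illustrative, not the actual generated code: the 4-column layout, directory names, and the assumption of ImageMagick 7's `magick` CLI and Typst being on the PATH are all just for the example.

```python
#!/usr/bin/env python3
"""Rough contact-sheet sketch: resize images, read EXIF dates, emit a Typst
grid, compile to PDF. Assumes ImageMagick 7 (`magick`; on ImageMagick 6 use
`convert`/`identify`) and `typst` are installed. No escaping of unusual
filenames is attempted."""
import subprocess
from pathlib import Path

SRC = Path("photos")        # input images (illustrative path)
THUMBS = Path("thumbs")     # resized copies
SHEET = Path("contact-sheet.typ")

THUMBS.mkdir(exist_ok=True)
entries = []

for img in sorted(SRC.iterdir()):
    if img.suffix.lower() not in {".jpg", ".jpeg", ".png", ".tif"}:
        continue
    thumb = THUMBS / (img.stem + ".jpg")
    # Resize down to a thumbnail with ImageMagick.
    subprocess.run(["magick", str(img), "-resize", "400x400", str(thumb)], check=True)
    # Pull the capture date out of the EXIF data (empty string if missing).
    date = subprocess.run(
        ["magick", "identify", "-format", "%[EXIF:DateTimeOriginal]", str(img)],
        capture_output=True, text=True,
    ).stdout.strip()
    entries.append((date, img.name, thumb))

entries.sort()  # date order

# Emit a simple Typst grid: thumbnail, filename, and date in each cell.
cells = [
    f'[#image("{thumb}", width: 100%) \\ #text(size: 7pt)[{name} \\ {date}]],'
    for date, name, thumb in entries
]
SHEET.write_text(
    "#set page(margin: 1cm)\n#grid(\n  columns: 4,\n  gutter: 10pt,\n"
    + "\n".join("  " + c for c in cells)
    + "\n)\n"
)

# Render the PDF next to the .typ file.
subprocess.run(["typst", "compile", str(SHEET)], check=True)
```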
Did I need to be a developer to do that? Arguably yes: to know the capabilities of the system, to know to ask it to leverage ImageMagick and Typst, to understand what the failure modes looked like, etc.
But I didn't need to be a programmer, and over time people like the OP will learn the development side of software development without learning the programming side.
And that's okay.
Overall both are net positives: I have some nice wood furniture and also a $7 Lack bedside table, and of course I rely on some industrial, long-term software (e.g. Linux), but almost every day I vibe-code some throwaway thing for a specific task.
I never did it because I imagined the pain of supporting every device or screen size, or dealing with someone who wants to know why their gift stopped working 6 months later.
The gains I’ve seen from LLM code - making me personally more productive in languages I’ve never used before - don’t erase the support burden, so I think I’d still avoid this.
Still, I wonder: will ordinary people soon find it easy enough to make software for their own amusement (not just us nerds doing side projects to stay current), and will my job ever morph into being a software “mechanic” who is paid to fix what someone else built? Not just “someone else working at the company that owns the software”, but a different company or individual entirely?
Will software maintenance become the job that big industry stops wanting to take because it’s so cheap to write something new that they’ll never fix this year’s model?
Or is software maintenance being democratised by LLMs such that a corner software shop could realistically offer maintenance for this one copy of a piece of software on this one device that the customer brings in?
I think we’ve never discussed a “software right to repair” because changing software is expensive, but we might see that change.
> The Extended Mind Theory argues that our tools are not just accessories, but literal extensions of our cognitive process. Viewed this way, a generic tool like a one-size-fits-all app, feels like a prosthetic that doesn’t quite fit.
Three years back now, my wife and I were planning a two-week-long trip and found it really difficult to simply move days around (e.g. shift a whole day in the schedule, with all of the places planned for that day) as we were planning a multi-city route. We started with Google Sheets (way too cumbersome), then Docs (cumbersome in a different way), then a simple app using Firebase + the Google Maps Embed API built over a weekend, and then ended up building a full-blown planning app and eventually a Chrome extension[0] that hooks directly into Google Maps (our preferred tool for exploring).
We are meticulous planners so I totally get the author's sentiment here. Many people see the app the first time and feel overwhelmed, but for us, it's hard to imagine using other tools now because this one fits "just right" having been built specifically for our planning process.
[0] For anyone interested: https://chromewebstore.google.com/detail/turasapp/lpfijfdbgo...
Otherwise, there is too much you have to get right before you have a suitable software base to start building your extra personalized features on. Building on existing open-source software (not designed to be extended) isn't great either, because you would need to merge any changes from the original software into your fork, as opposed to a purpose-built SDK that would better tolerate plugins across different base software versions.
I'm working on this for gaming, but the idea is really applicable to any kind of software, if the goal is to allow people to easily create and run personalized versions of it with less effort and less chance of things going wrong.
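As a toy sketch of the kind of extension point I mean (every name here is hypothetical, not an existing SDK): the host commits to a small, versioned hook interface, and the personalized behaviour lives entirely outside the host's codebase, so it doesn't have to be re-merged whenever the base software changes.

```python
from typing import Protocol

# Hypothetical extension point: the host application commits to this small,
# versioned interface instead of asking users to fork and re-merge its code.
PLUGIN_API_VERSION = 1

class Plugin(Protocol):
    api_version: int

    def on_event(self, event: dict) -> dict:
        """Receive a host event and return (possibly modified) data."""
        ...

class GreetingTweak:
    """A user's personalized behaviour, kept outside the host's codebase."""
    api_version = PLUGIN_API_VERSION

    def on_event(self, event: dict) -> dict:
        if event.get("type") == "greeting":
            event["text"] = event["text"].upper()
        return event

def run_plugins(plugins: list[Plugin], event: dict) -> dict:
    for p in plugins:
        # The host checks only the declared plugin API version, not the host
        # version, so the same plugin survives base-software upgrades.
        if p.api_version == PLUGIN_API_VERSION:
            event = p.on_event(event)
    return event

if __name__ == "__main__":
    print(run_plugins([GreetingTweak()], {"type": "greeting", "text": "hello"}))
```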
When I saw "Perfect Software" in the title, I thought it referred to Perfect Software, the developer who produced the Perfect Writer word processor, Perfect Calc spreadsheet, and Perfect Filer database. These were a suite of office software products developed in the early '80s for CP/M and MS-DOS computers.
However, I'm skeptical about AI, because what I've understood of agentic processes is that they're more about cheap dopamine.
When it comes to medium-sized software development (over 50k LOC), there is much less fun and much more pain, because a growing codebase doesn't let you add new features easily.
I believe it is important not to confuse the dose of dopamine you get from agentic results, as in the article, with the achievement that comes from long-standing work, even if the latter is less attractive from a short-term perspective.
There is so much software out there, written by people who wanted to solve their particular problem, just like you. Chances are that some of it will fit your needs, and, if the software is flexible enough, will allow you to customize it to make that fit even better.
This is why the Unix philosophy is so powerful. Small independent programs that do one thing well, which can be configured and composed in practically infinite number of ways. I don't need to write a file search or sorting program, nor does the file search program need to implement sorting. But as a user, I can compose both programs in a way to get a sorted list of files by any criteria I need. This is possible without either program being aware of the other, and I can compose programs written decades ago with ones written today.
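To make that composition concrete, here is a minimal sketch in Python (the idiomatic form would of course just be a shell pipeline): `find` and `sort` are connected by a pipe, and neither program knows the other exists.

```python
#!/usr/bin/env python3
"""Compose two independent programs: GNU `find` lists files with their sizes,
`sort` orders the lines numerically. The pipe is the only contract between
them; equivalent to the shell one-liner
    find . -type f -printf '%s %p\n' | sort -n
"""
import subprocess

# `find` prints "<size-in-bytes> <path>" for every regular file.
find = subprocess.Popen(
    ["find", ".", "-type", "f", "-printf", "%s %p\\n"],
    stdout=subprocess.PIPE,
)
# `sort -n` orders the lines numerically, i.e. smallest file first.
result = subprocess.run(
    ["sort", "-n"], stdin=find.stdout, capture_output=True, text=True
)
find.stdout.close()
find.wait()
print(result.stdout)
```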
You can extend this to other parts of your system as well. Instead of using a desktop environment like GNOME, try using a program that just manages windows. Then pick another program for launching applications. And so on. This is certainly more work than the alternative, but at the end of the day, you feel like you are in control of your computer, instead of the other way around.
We see LLMs as a huge opportunity here to self-define.
And existing software as too lumbering & weighty.
But there are so many other dimensions and levels of how and why the past hasn't let us situate our software and ourselves together effectively. The architecture of software just doesn't have this ability to be grown into.
I love the Home-cooked Software and Barefoot Developers material. But neither of those ideas, nor perfect software, nor an audience of one actually appeals to me that strongly. They are all enormously positive breaks from bad software and bad times in which we didn't have basic liberty over our systems. But they all strike me as valorizing a particularly isolated, rejectionist view of software, one that is ultimately rude to the protocol- & affordance-building behind the good, healthy, and connected form of software that we might, and perhaps SHOULD, aspire to.
Still, anything that unjams us from the inflexible, unarticulated, illegible mess of systems we can at best endure today is doing great work. Many positive steps out of bad tar pits and into greater beyonds. 2025 has amazing hope amid all this.
I have years of experience, but I never had the time (or will) to take on some _very minor nuisances_, or areas of dev far from my day-job expertise.
LLMs solved this. I produced about 12 different things that "I needed" to improve aspects of my life.
Each one took between a few hours and 3 days, and most of them I use daily; they are the most-used applications (mobile, desktop, and web) for my family.
It is a game changer.
Personalized custom software could never really reach critical mass before; LLMs enabled it. This is the age of personalized software: egosoftware, llmware.
But as of today, have we all just collectively decided to pretend that the LLMs we have are capable of writing good software?
I use LLMs a lot in my workflow, I probably spend a whole day per week learning and fiddling with new tools, techniques, etc. and trying to integrate them in all sorts of ways. Been at it for about a year and a half, mainly because I’m intrigued.
I’m sorry but it still very much sucks.
There are things it’s pretty good at, but writing software, especially in large brownfield projects, is not one of them. Not yet, at least.
I’m starting to believe many are just faking it.
It's never required LLMs. In fact, I think the idea that "LLMs allow us to write software for ourselves" borders on missing the point, for me at least. I write software for myself because I like the exploratory process .. figuring out how to do something such that it works with as little friction as possible from the side of the user, who is of course myself, in the future.
I like nitpicking the details, getting totally side-tracked on seemingly frivolous minutiae. Frequently enough, coming to the end of a month-long yak-shave actually contributes meaningful insight to the problem at hand.
I guess what I'm trying to say is "you're allowed to just program .. for no other reason than the fun of it".
As evidence for my claims: a few of my 'perfect' projects
https://github.com/scallyw4g/bonsai
Which is ironic considering the subject matter. “Perfect”, but artificially constructed. “Just for me”, but algorithmic slop.
I agree that you can do so much more custom tailoring of bespoke software with the speed an LLM brings. But something inside of me still revolts at calling this anything other than “convenient”.
“Perfect” I will reserve for things I’ve made myself. However “imperfect” they may really be.
And as a learning tool, it’s extraordinary. Not because it replaces understanding, but because it accelerates it: you can explore unfamiliar domains, compare approaches, and iterate with feedback that used to take days or weeks.
The responsibility to think, judge, and decide still sits entirely with the developer.
How do we work together when we all have our own unilateral views of software?
We know, your blog is the first static webpage in existence.
> Third, it brings back autonomy.
This is the new talking point. Musk claims that cars that are always connected provide "autonomy", vibe coders claim that the stolen code distributed by Anthropic provides "autonomy".
War is peace, freedom is slavery.