Broadly speaking, I think this is a wise assessment. There are opportunities for productivity gains right now, but I don't think it's a knockout for anyone using the tech, and I think onboarding might be challenging for some people in the tech's current state.
It is safe to assume that the tech will continue to improve in both ways: productivity gains will increase, and onboarding will get easier. I think it will also become easier to choose a particular suite of products to use. Waiting is not a bad idea.
- If you'd invested in Bitcoin in 2016, you'd have made a 200x return
- If you'd specialized in neural networks before the transformer paper, you'd be one of the most sought-after specialists right now
- If you'd started making mobile games when the iPhone was released, you could have built the first Candy Crush
Of course, you could just as well have
- become an ActionScript specialist as it was clearly the future of interactive web design
- specialized in Blackberry app development as one of the first mobile computing platforms
- made major investments in NFTs (any time, really...)
Bottom line - if you want to have a chance at outsized returns, but are also willing to accept the risks of dead ends, be early. If you want a smooth, mid-level return, wait it out...
So, a decade of hanging by a thread, getting by and doubling down on CS, hoping that the job market sees an uptick? Or trying to switch careers?
I went to get a flat tire fixed yesterday and the whole time I was envious of the cheerful guy working on my car. A flat tire is a flat tire, no matter whether a recession is going on or whether LLMs are causing chaos in white collar work. If I had no debt and a little bit saved up I might just content myself with a humble moat like that.
Bitcoin is a good example: if you bought it 15 years ago and held it, you're probably quite wealthy by now. Even if you sold it 5 years ago, you would have made a ton of money. But if you quit your job and started a cryptocurrency company circa 2020, because you thought crypto would eat the entire economic system, you probably wasted a lot of time and opportunities. Too much invested, too much risked.
AI is another one. If you were using AI to create content in the months/years before it really blew up, you had a competitive advantage, and it might have really grown your business/website/etc. But if you're now starting an AI company that helps people generate content about something, you're a bit late. The cat is out of the bag, and people know what AI-speak is. The early-adopter advantage isn't there anymore.
Ironically, one might even get projects to fix the mess left behind, as the magpies focus their attention on something else.
In the case of AI, the fallacy is thinking that even if you ride the wave, everyone is allowed to stay around, now that the team can deliver more with fewer people.
Maybe rushing out to the AI frontline won't bring the returns that one is hoping for.
EDIT: To make the point even clearer: with SaaS and iPaaS products, serverless, and managed clouds, many projects now require a rather small team, versus having to develop everything from scratch on-prem. AI-based development reduces the team size even further.
But IMO the most fruitful thing for an engineering org to do RIGHT NOW is learn the tools well enough to see where they can be best applied.
Claude Code and its ilk can turn "maybe one day" internal projects into live features after a single hour of work. You really, honestly, and truly are missing out if you're not looking for valuable things like that!
But the curious early adopters were the ones best positioned to be leading the charge on "cloud migration" when the business finally pulled the trigger.
Similarly with mobile dev. As a Java dev at the time Android came along, I didn't keep abreast of it - I figured I could always get into it later. Suddenly the job ads were "Android Dev. Must have 3 years experience".
Sometimes, even just out of self-interest, it's easier to get in on the ground floor, while the surface area of things to learn is still small, than to wait too long before checking something out.
As with any other skill, if you can't do something, it can be frustrating to peers. I don't want colleagues wasting time doing things that are automatable.
I'm not suggesting anyone should be cranking out 10k LOC in a week with these tools, but if you haven't yet done things like sent one in an agentic loop to produce a minimal reprex of a bug, or pin down a performance regression by testing code on different branches, then you could potentially be hampering the productivity of the team. These are examples of things where I now have a higher expectation of precision because it's so much easier to do more thorough analysis automatically.
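To make the second example concrete, here's a minimal sketch of the kind of loop I mean - check out each ref, run the same benchmark, compare timings - which an agent can write and babysit for you. The branch names and benchmark command are hypothetical placeholders; substitute your own.

```python
# Minimal sketch: time the same benchmark across several refs.
# REFS and BENCH are hypothetical; swap in your own branches and command.
import subprocess
import time

REFS = ["main", "v1.4", "perf-suspect"]                  # hypothetical refs to compare
BENCH = ["python", "-m", "pytest", "benchmarks", "-q"]   # hypothetical benchmark command

def timed_run(cmd):
    start = time.perf_counter()
    subprocess.run(cmd, check=True, capture_output=True)  # fail loudly if the benchmark errors
    return time.perf_counter() - start

for ref in REFS:
    subprocess.run(["git", "checkout", "--quiet", ref], check=True)
    print(f"{ref}: {timed_run(BENCH):.2f}s")
```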
There are always caveats, but I think the point stands that people generally like working with other people who are working as productively as possible.
This really hinges on what you mean by "didn't use git".
If you were using bzr or svn, that's one thing.
If you were saving multiple copies of files ("foo.old.didntwork" and the like), then I'd submit that you're making the point for the AI supporters. I consulted with a couple developers at the local university as recently as a couple years ago who were still doing the copy files method and were struggling, when git was right there ready to help.
Clearly there's an advantage for being an early adopter, but the advantage is often overblown, and the cost to get it is often underestimated.
I didn't pick them up until last November and I don't think I missed out on much. Earlier models needed tricks and scaffolding that are no longer needed. All those prompting techniques are pretty obsolete. In these 3-4 months I got up to speed very well, I don't think 2 years of additional experience with dumber AI would have given me much.
For now, I see value in figuring out how to work with the current AI. But next year even this experience may be useless. It's like, by the time you figure out the workarounds, the new model doesn't need those workarounds.
Just as in image generation, maybe a year ago you needed five LoRAs, ControlNet, negative prompts, etc. to not have weird hands; today you just no longer get weird hands with the best models.
Long term the only skill we will need is to communicate our wants and requirements succinctly and to provide enough informational context. But over time we have to ask why this role will remain robust. Where do these requirements come from, do they simply form in our heads? Or are they deduced from other information, such that the AI can also deduce it from there?
I am dying inside when I make a comment and receive a response that has clearly been prompted toward my comment and possibly filtered in the voice of the responder if not copied and pasted directly. Particularly when it's wrong. And it often is wrong because the human using them doesn't know how to ask the right questions.
Fortunately, most of the fundamental technological infrastructure is well in place at this point (networking, operating systems, ...). Low skilled engineers vibe coding features for some fundamentally pointless SaaS is OK with me.
This is a great framing.
There's a tough timing call that comes into play with this, because there is no ability to predict, only to look back in hindsight. It's more of a hedge to be in and say "I bought a lottery ticket and had a chance during the AI boom" than it is to not engage.
I believe the saying is "you never win if you never play."
Nothing is happening. And if it is, it's just hype.
And if it isn't, it only works on toy problems. And if it doesn't, I'll learn it when it stabilizes.
And if I can't, the gains all go to owners anyway. And if they don't, it's just managers chasing metrics.
And if it isn't, well I'm a real programmer. And if I'm not, then neither are you.
A practitioner with more experience may be a few percentage points more productive, but the median approach - grab a subscription, get the tool, prompt - will be mostly good enough.
In contrast to the current top comment [1], I don't think this is a wise assessment. I'm already seeing companies in my network stall hiring, and in fact start firing. I think if you're not trying to take advantage of this technology today then there may not be a place for you tomorrow.
I find it hard to empathise with people who can't get value out of AI. It feels like they must be in a completely different bubble to me. I trust their experience, but in my own experience, it has made things possible in a matter of hours that I would never have even bothered to try.
Besides the individual contributor angle, where AI can make you code at Nx the rate of before (where N is say... between 0.5 and 10), I think the ownership class are really starting to see it differently from ICs. I initially thought: "wow, this tool makes me twice as productive, that's great". But that extra value doesn't accrue to individuals, it accrues to business owners. And the business owners I'm observing are thinking: "wow, this tool is a new paradigm making many people twice as productive. How far can we push this?"
The business owners I know who have been successful historically are seeing a 2x improvement and are completely unsatisfied. It's shattered their perspective on what is possible, and they're rebuilding their understanding of business from first principles with the new information. I think this is what the people who emerge as winners tomorrow are doing today. The game has changed.
Speaking as an IC who is both more productive than last year and simultaneously more worried.
I was developing in the metaverse space, and the problems we were facing led me to learn about the state of AI image generation (2018), and where the world was headed.
People assume the thing you are focused on is the thing that must win in the end, but that just means you are too focused on your little part of the world to take in the bigger picture of what's out there.
Prior to working in the metaverse (really a form of volumetric video, but I won't go into details), I was working in telehealth (2014). I did some research in augmented reality (2009), and lots of other areas of interest as well.
Some people would say I wasted my time on these, but there was a mass of secondary learnings which I value every day.
Sadly, I'm still disagreeing while crypto kiddies are driving past me in Lambos. If it's the future of money, yes, we'll get there eventually, but like every technology shift, there's a lot of money to be made in the transition, not after. *
* I sold all crypto a few years ago and I'm a happier person :D
Getting it to write code that's actually efficient is iffy at times, and you'd better know the language well or you'll get yourself in trouble. I've watched AI make my code more complex and harder to read. I've seen it put an import in a loop. It's removed the walrus operator because it doesn't seem to understand it. It's used older libraries or built-ins that are no longer supported. It's still fun and does save me some time with certain things, but I don't want to vibe code much because it takes the joy out of what you're doing.
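For anyone who hasn't hit these, here's a tiny made-up illustration (not my actual code) of the two failure modes I mean: an import dropped inside a loop, and a walrus-operator read loop that an assistant will happily "simplify" away.

```python
# Made-up illustration of the two failure modes mentioned above.

# 1) Import placed inside the loop: it works, but it belongs at the top of the module,
#    not re-executed on every iteration.
def checksums(paths):
    results = []
    for path in paths:
        import hashlib  # should live at module level
        with open(path, "rb") as fh:
            results.append(hashlib.sha256(fh.read()).hexdigest())
    return results

# 2) The walrus operator assigns and tests in one expression; "removing" it forces
#    a clunkier priming read or a break-in-the-middle loop.
def read_chunks(fh, size=8192):
    chunks = []
    while (chunk := fh.read(size)):
        chunks.append(chunk)
    return chunks
```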
> Few are useful to me as they are now.
Except current AI tools are extremely useful, and I think you're missing something if you don't see that. This is one of the main differences between LLMs and cryptocurrency; cryptocurrencies were the "next big thing", always promising more utility down the road. Whereas LLMs are already extremely useful: I'm using them to prototype software faster, Terence Tao is using them to formalize proofs faster, my mom's using them to do administrative work faster.
Wonderful life lesson on hype cycles. I am curious if hype literacy will join media literacy in academia.
i'll just say, and i understand this is not the point of the article at all, but for all its faults, if you got in on Flash as early as HTML 2.0 and you were staring at the upcoming dead-end of Flash in, say, 2009, you also knew or had been exposed to plenty of JavaScript, E4X, and what were essentially entirely client-side SPAs, giving you a sort of bizarro preview of React a couple of years later. honestly, not a bad offramp even if Flash itself didn't make it.
But AI is a beast.
It's a LOT to learn: RAG, LLMs, architecture, tooling, ecosystem, frameworks, approaches, terms, etc., and this will not go away.
It's clear today, and it was already clear with GPT-3, that this is the next thing, and in comparison to other 'next things' it's arriving in the perfect environment: the internet allows for fast communication, and manufacturing has never been as fast, flexible, and globally scaled as it is today.
Which means whatever the internet killed and changed will happen, and is happening, a lot faster with AI.
And tbh, if someone gets fired in the AI future, it will always be the person who knows less about AI, and less about how to leverage it, than the other person.
For me personally, I just enjoy the whole new frontier of approaches, technologies, and progress.
But I would recommend EVERYONE to regularly spend time with this technology. Play around regularly. You don't need to use it, but otherwise you will not gain any gut knowledge of one model vs. another, and it will be A LOT to learn when it crosses the line for whatever you do.
At any moment, you are failing at thousands of things that you may not even know about, and that is the gist of what I took away from it. The thing is that you have to be OK when you intentionally choose to not invest in something as regret is ultimately a poison.
The other thing is this: you are not obligated to bring people with you and you have a choice of free association.
No, they are not.
The risk of getting in early on crypto is you lose a little money. The risk of not is missing out on money. You can't simply replay that later, the way that you could invest the time to catch up on how git works.
I mean, if you did that and started in 2010, you'd have contributed ~$38K USD by 2026 and have ~1.5B USD now. BTC being so cheap back then dominates the whole process, so to demonstrate the point further: if you had heard about it all those years, were nervous about trying it, and decided to wait until 2016, you'd still only need to put in $24K overall to come out with ~$450K by 2026.
That's without biting your fingernails over the price changes, the hype cycles, the price-drop scares. You just set and forget a $200 recurring buy a month, put your energy elsewhere, and pocket half a million for basically no effort.
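(Back-of-the-envelope check on the contribution totals only, assuming a flat $200/month; the payout figures depend on historical BTC prices, which this doesn't model.)

```python
monthly = 200
print(monthly * 12 * 16)  # 2010 to 2026: 38,400 contributed, i.e. ~$38K
print(monthly * 12 * 10)  # 2016 to 2026: 24,000 contributed, i.e. $24K
```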
And if anything is possible in hindsight, then why, in hindsight, would you write an article acting like Bitcoin was a bad thing to be an early adopter of?
That said, my only regret with Bitcoin was deleting my early wallets when I realized the coins were only worth $.25 ... if I'd had any inkling what they'd be worth someday, I'd probably have just bought $1000 worth back then and zipped it up until closer to today. I'm truly curious how many bitcoins were similarly deleted from existence.
It also shows a passion for learning and improvement, something hiring managers are often looking for signals of.
But of course it's a trade off. This rewards people who don't have family or other obligations, who have time to learn all the new fads so they can be early on the winners.
When I eventually got around to using Rust, I was hooked, and now I don't use C++ anymore if I can choose Rust instead. The hype was not completely unjustified, but it was also misplaced, and to this day I disagree with most of those hype projects.
It was no issue to silently pick up Rust, write some code that solves problems, and enjoy it as a very very good language. I don't feel a need to personally contact C or C++ project maintainers and curse at them for not using Rust.
I do the same with AI. I'm not going around screaming at people who dare to write code by hand, going "Claude will replace you", or "I could vibe code this for 10 bucks". I silently write my code, I use AI where I find it brings value, and that's it.
Recognize these tools for what they are: just tools. They have use cases, tradeoffs, and a massive community of incompetent idiots who like them ONLY because they don't know better, not because they understand the actual value. And then there are the normal, everyday engineers, who use tools because, and ONLY because, they solve a problem.
My advice: don't be an idiot. It's not the solution to all problems. It can be good without being the solution to every problem. It can be useful without replacing skill. It can add value without replacing you. You don't have to pick a side.
All I know is, I've always enjoyed building things. And I enjoy building things with AI-assisted tools too, so I'll continue doing it.
I saw a meme on X the other day which roughly says that one does not have to learn if one learns slowly enough in the age of AI. I guess the undertone is that AI evolves faster than one can learn the tricks of using it.
It is a skill, but not a special AI specific skill.
That does not obviously follow. I do worry about the ever-increasing proportion of humanity who are no longer 'economically viable', and this includes people who are not yet born.
But on the other hand... I also only learned git when I needed it at a new job... So we can pump the brakes a bit.
I was highly skeptical of this happening not that long ago, but I have to say that it seems increasingly likely. LLMs are still quite mediocre at esoteric stuff, but most software development work isn't esoteric. There's the viable argument that software development largely isn't about writing code, but the ability to write code is what justifies software developer salaries, because there's a large barrier to entry there that most just can't overcome. The 80/20 law seems to apply to everything, certainly here - 80% of your salary is justified from 20% of what you spend your time doing.
It's quite impossible to imagine what this will do to the overall market, because while this sounds highly negative for software developers, we're also talking about a future where going independent will be way easier than ever before, because one of the main barriers for fully independent development is gaps in your skillset. Those gaps may not be especially difficult, but they're just outside your domain. And LLMs do a terrific job of passably filling them in.
It'd be interesting if the entire domain of internet and software tech plummets in overall value due to excessive and trivialized competition. That'd probably be a highly disruptive but ultimately positive direction for society.
This is an excellent characterization of the kind of marketing tactic I see all over social media right now, and one that I find absolutely disgusting.
The keyword here is fear. Despite its faux-positive veneer, the messaging around certain technologies (especially GenAI) is clearly designed to induce anxiety and fear, rather than inspire genuine optimism or pique curiosity. This is significant, because fear is one of the most powerful tools to shut down rational thinking.
The subliminal (although not very subtle) message there is something very primitive. "If you don't join our group, you will soon starve to death." This is radically different from how most transformative technologies were promoted in the past.
Chasing every new tech will lead to burnout and disillusionment at some point.
AI probably isn't going away in the same way NFTs largely did, and I use it to some degree. However, I don't see a lot of value in being on the bleeding edge of AI, as the shape it will take, and the skills that will be used for the next 10 years, are still forming. Trying to keep up now means constantly adapting how I work, with more time spent keeping up on the changes in AI than actually doing something useful with it.
After the bubble pops, I think we'll start to see a much more clear picture of what the landscape of AI will look like long-term. Who are the winners, who are the losers, and what tools rise to the top after the hype is gone. I'll go deeper at that time.
Right now, the only thing I'm allowed to use at work is Copilot, so I just use that and don't bother messing around with much more in my free time.
A ton of it is garbage. But 1 in 1,000 gets to realize a vision that could never be marketed, never be discovered, and is something entirely new. For that I love it and thank those who made it.
I am actually surprised by people willingly trying to be more productive, like... machines. And then crying when machines are proven to be better at being machines than meatbags.
(I'm not the earliest adopter of crypto and AI by any means. I only rode up crypto a couple of times for 2X and 3X kinda gains on my investment, and I only started using Claude last year.)
It is said that major providers more than break even on what they're charging.
But at the same time that's not the point of capitalism, is it? The point is to charge close to the value you're providing.
My lunch money is approximately $10, and I often blow through as much in Claude tokens generously provided by the company which hired me. But I'm not getting just $10 of value from those tokens; I'm getting much more.
The cost of entry to this market is extremely high. Should Anthropic win and become a near monopoly, it is bound to keep increasing prices to the point where the value it's providing matches the cost.
That's the endgame of every AI company out there. It's worth using these tools now, while there's still competition and moats haven't been established.
Employment?
This line, as one example:
> For every HTML 2.0 you might have tried, you were just as likely to have got stuck in the dead-end of Flash.
Like a lot of tech Flash had its moment in the sun and then faded away, but that “moment” lasted a decade, and plenty of people got their start because of or built successful businesses around it. Did they have to pivot as Flash waned? Sure, but change is part of life.
I’m sorry but I find the take expressed in this piece to be absolutely miserable and uninspiring.
But, hey, congratulations on the 20:20 hindsight, I suppose.
I do think it's a bad take though. Not all new trends are the same: the metaverse was an obvious flop and crypto hasn't found practical applications. AI isn't like those because it's already practically changed the way I get my job done.
It takes time to learn skills, and getting started earlier will mean more time to use them in your working life.
I made these kinds of mistakes early in my career: I stuck it out with PHP for far too long, ignoring all the changes with frontend design trends, React, etc. I was using jQuery far too late in my career, and it really hurt me during interviews. What I was doing was seen as dated, and it made ageism far worse for me.
I was showing a portfolio website that used tables instead of divs.
I had to rapidly skill up and it takes longer than you think when you stick too long with what works for you.
If AI truly is a nothing-burger, then guess what? Nothing lost, and perhaps you learned some adjacent tech that will help you later. My advice is to NEVER stop learning in this field.
Learning is your true superpower. Without that skill, you are a cog that will be easily replaced. AI has revealed to me who among my colleagues is curious, and a continuous learner. Those virtues have proven over the course of my 25+ year career in technology to be what keeps you relevant and marketable.
> It is 100% OK to wait and see if something is actually useful.
> I took part in a vaccine trial
> Getting Jabbed With EXPERIMENTAL SCIENCE!
This is such a weird article. The author presents so many contradictory anecdotal experiences against the author's own conclusion.
You're trying to make the point using Bitcoin, but in its early days I had just over 14,000 of them, so I can quite clearly see a point in getting in early.
The only scenario where I think it pays off to be on top of the hype is if you are chasing the money sloshing around the latest hype. You know, the hustle culture thing. If that's not your thing, waiting until things are established (if they ever get there) is harmless.
And yeah, AI as it is now is at best moderately useful. I use it on a daily basis, but could do without it with little harm.
it’s readily apparent who has bought into the llm hype and who hasn’t
This is the lazy guy's path, not the wise one.
Of course those that believe that AI will convert into AGI and destroy society as we know it won't be convinced.
When AI becomes easy to pick up and guide, guess what: there will be no need for a programmer to pick it up. AI will be using itself, a Claude manager driving Claude programmers.
So leverage AI while you still can provide value doing so.
It's literally a "use it or lose it" situation.
I'm glad I jumped early on: Linux, Python, virtualization, cloud, nodejs, Solana.
I wish I'd gotten into Rust and LLMs earlier.
I mean... yeah? It's obviously true. However people use LLM coding today not because they're "afraid of being left behind" or "investing into a new tech" or whatever abstract reasoning. It's because they're already reaping the benefit right away. It takes just a few hours to go through like 80% of the learning curve.
This is more on the scale of the invention of the printing press, the telegraph, or the internet itself.
"I'm ok being left behind, I will join this Internet thing when it really becomes useful"...
Ok... you do you. Hope you don't get there too late.
These companies are paying for the privilege of having their IP stolen.
LLMs, at the moment, are all about giving up your own brain and becoming fully dependent on a subscription-based online service.
IMO it reads a little desperate, and very much like the hype bros but from the opposite side. Take a look at the articles if you don't believe me.
https://shkspr.mobi/blog/tag/ai/
- I'm OK being left behind, thanks!
- Unstructured Data and the Joy of having Something Else think for you
- This time is different
- How close are we to a vision for 2010?
- AI is a NAND Maximiser
- Reputation Scores for GitHub Accounts
- Agentic AI is brilliant because I loath my family
- Stop crawling my HTML you dickheads - use the API!
- Removing "/Subtype /Watermark" images from a PDF using Linux
- LLMs are still surprisingly bad at some simple tasks
- Books will soon be obsolete in school
- Winners don't use ChatGPT
- Grinding down open source maintainers with AI
- Why do people have such dramatically different experiences using AI?
- Large Language Models and Pareidolia
- How to Dismantle Knowledge of an Atomic Bomb
- GitHub's Copilot lies about its own documentation. So why would I trust it with my code?
- LLMs are good for coding because your documentation is shit
Not going to lie, I’d rather be poor. Not destitute - I’ve been poor but not destitute, and I’d rather not get desperate - but poor? As in (because “poor” is very imprecise and can imply anything from utter poverty to “not owning three homes”) having a low-paying job but still enough to pay rent?
I’d rather be that than do AI assisted software development. Genuinely the only thing stopping me now is that there’s actually way more skill and qualifications in most low-paying jobs than a typical software developer imagines, and acquiring those takes time and money itself. But by now I know multiple people who made the jump even before the latest madness, and they’re all happier. Some still code, but don’t even publish. Some are like “I haven’t used a proper computer in _months_ this is great.” All work hard jobs at odd hours. None regret.
Then of course, there are MANY problems starting with BTC https://blog.dshr.org/2025/09/the-gaslit-asset-class.html but they generally aren't what most people talk about, and today, even if the complexity is insane, the documentation sparse, and the code quality questionable, Lightning is actually the only truly scalable solution we have for micropayments and payments of various amounts, generally not large ones. It has some absurd aspects, but that hardly matters. It works.
More generally, we don't have a crystal ball; where we can, we diversify, certainly limiting risk but also taking a bit of a gamble; where we can't, we choose to watch and see how it goes, knowing that we'll pay for it in terms of returns.
As things stand for me personally, given the level of IT obscenity in the traditional banking world, which in 2026 still doesn't have decent APIs open to retail customers, or at least decent export functions, shameful websites, a push towards completely unacceptable mobile apps, absurd limits etc etc etc, well, the worst CEx is less bad than the best bank. I don't trust either, but I distrust the CEx less than the bank, and given Wikileaks, Francesca Albanese, the protesters in Canada, the various private individuals illegally "sanctioned" by the EU Commission and so on, I'd say it's madness to rely on banks for anything more than the bare minimum.
Most people today know nothing of this and don't weigh it up, but they will, and they'll pay for this delay with their lack of attention.
In general, we as a society have not adjusted to technology. We've gone through too much change to have any stable baselines. So we're going to float in insanity for a while until things finally settle down. Probably two wars, a famine, and several periods of resource scarcity away still, but we'll get there one day...
Did handmade Swiss watch movements lose all demand when Asia started mass-manufacturing watches? No. There is always going to be more demand for quality over slop. It's the same reason that handmade clothes are worth 100x more than clothes at a department store.
This is all by design too: these billionaires selling thinking machines are trying to make us all dependent on their fountain of tokens. Don't fall for it. Just like how maps apps made everyone reliant on Google/Apple for the ability to navigate around your own city, these billionaires want to do the same thing with your ability to think, build, plan, and even learn/read.
Don't fall for this scam, unlike other hype cycles like NFTs and Crypto this will actually damage more than just your bank account, it will fry your brain if you become over reliant on it.
Take a second and consider why these LLM tool companies design their products like slot machines. They put multipliers in their UIs (run this x3, x4, x5 times) so that you inevitably treat the thing like a slot machine. And it is like a slot machine: you have no way to control the results, it's quite random; in the case of LLMs they just have a better payout percentage, at the cost of making your brain dopamine-dependent and structurally dependent on their output. They convince people there is some occulted art in the formation of a prompt, like a gambler who thinks that if they press the buttons in a certain order they'll get better results, or any of many other gambling superstitions.
If you're writing software, please take a moment to breathe, and ask yourself if it's really that useful to have piles of code where you have little idea how things work, even if they do. Billionaires will sell you on the idea that this doesn't matter, because the LLM, which you conveniently have to pay them to use, will always be able to fix that bug.
Don't fall for the ruling class's trick: they want you reliant on this thing so they can tell you that your input isn't as valuable, and therefore your salary and skills are not as valuable. We have to stop this now.
When maps apps came around, people totally lost the brain muscle for navigating. Using LLMs is no different; people over-reliant on these tools are simply ngmi. They are going to be totally reliant on their favorite billionaire being willing to sell them competency via their thinking machines.
I would caution everyone to consider whether the billionaires who are screaming that you're going to be left behind, laid off, and made redundant if you don't (pay them to) use their brain-nerfing machine really have your best interests at heart.
You're not going to be left behind.
These only really happen in mature codebases tied up in complex business requirements.
The last few times I’ve tried LLMs with this codebase it has not been fruitful.
Weird because it’s impressive in other areas, especially tech with no real users lmao
Why not simply evaluate things instead of ignoring them until it's too late?
Sure, we don't have infinite time, but the fact that OP mentions these two things means the pattern showed up enough.
What jobs aren’t requiring usage of these tools by now?
I remember when React was the hotness and I was still using jQuery. I didn't learn it immediately; it was maybe a couple of years later when I finally started to use React. I believe this delayed my chances of getting a job, especially around that time when hiring was good, e.g. 2016 or so.
With vibe-coding it just sucks the joy out of it. I can't feel happy if I can just say "make this" and it comes out. I enjoy the process... which, yeah, you can say it's "dumb/a waste of time" to bother with typing out code with your hands. For me it isn't just about "here's the running code"; I like architecting it, deciding how it goes together, which, yeah, you can do with prompts.
Idk, I'm fortunate that right now using tools like Cursor/Windsurf/Copilot is not mandatory. I think in the long run, though, I will get out of working in software professionally for a company.
I do use AI though, every time I search something and read Google's AI summary, though you could argue it would be faster to just use a built-in thing that types for you vs. copy-pasting.
Which again... what is there to be proud of if you can just ask this magic box to produce something and claim it as your own. "I made this".
Even design can be done with AI too (mechanical/3D design) then you put it into a 3D printer, where is the passion/personality...
Anyway yeah, my own thoughts, I'm a luddite or whatever