We have seen so many massive changes to software engineering in the last 30 years that it is hard to argue the clear utility of any specific topic or tool. When I first started it really mattered that you understood bubble sort vs quicksort, because you probably had to code it. Now very few people think twice about how sort happens in Python or how hashing mechanisms are implemented. It does, on occasion, help to know that, but not like it used to.
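For anyone who never had to, here's roughly what "you probably had to code it" meant, a toy quicksort of the kind you'd once write by hand (a sketch for illustration, not production code):

```python
# Minimal quicksort: pick a pivot, partition, recurse.
# Not in-place and not pivot-optimized -- just the idea.
def quicksort(xs):
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    smaller = [x for x in rest if x < pivot]
    larger = [x for x in rest if x >= pivot]
    return quicksort(smaller) + [pivot] + quicksort(larger)
```

Today you'd just call `sorted()`, which is exactly the point.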
So that brings it back to what I think is a fundamental question: if CS topics are less interesting now, are you shifting that curiosity to something else? If so, then I wouldn't worry too much. If not, then that is something to be concerned about. So you don't care about red-black trees anymore, but you are getting into auto-generating Zork-like games with an LLM in your free time? You are probably on a good path if that is the case. If not, then find a new curiosity outlet and don't beat yourself up about not studying the limits of a single-stack automaton.
I agree. The nature of the machine is to crush the artisanry and joy out of the task. However, you can't beat it, so…
I use the miserable things as "research accelerators." I have neither the time, nor the capacity to sustain the BAC necessary, to parse all of the sources and documentation of the various systems in which I'm liable to take interest. I very rarely ask them to "do ${task} for me," but rather: "What is the modern approach to ${task}? And how do I avoid that and do ${task} in the spirit of Unix?" "Has anyone already done ${task} well?" "Are there any examples of people attempting ${task} and failing spectacularly?"
If you treat it like your boss, it'll act like your boss. If you treat it like your assistant, it'll act like your assistant.
Edit: derp.
If you haven't learned the fundamentals, you are not in a position to judge whether AI is correct or not. And this isn't limited to AI; you also can't judge whether a human colleague writing code manually has written the right code.
Either the AI doesn’t understand them, and you need to walk it down the correct path, or it does understand them, and you have to be able to have an intelligent conversation with it.
Either way, if you want to talk with an LLM on the same level, you're going to need to train on the same dataset.
Simon wrote an article "What is agentic engineering?" [1]
> Now that we have software that can write working code, what is there left for us humans to do?
> The answer is so much stuff.
> Writing code has never been the sole activity of a software engineer. The craft has always been figuring out what code to write. Any given software problem has dozens of potential solutions, each with their own tradeoffs. Our job is to navigate those options and find the ones that are the best fit for our unique set of circumstances and requirements.
Such navigation may require various skills. For example: people/product skills (e.g. customer empathy) to determine what to build, or engineering skills (e.g. optimizations). Be open to learning and grow stronger through feedback.
[1]. https://simonwillison.net/guides/agentic-engineering-pattern...
AI still needs some lucky wizards with CS skills that will keep it going, at least until Skynet gets turned on.
AI tools still don't care about the former most of the time (e.g. maybe we shouldn't do a loop inside a loop every time we need to find a matching record; maybe we should just build a hashmap once).
And I don't care if they care about the latter.
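A minimal sketch of that nested-loop-vs-hashmap point (the record shapes and function names are hypothetical, just for illustration):

```python
# Naive approach: scan one list for every element of the other -- O(n*m).
def match_slow(orders, customers):
    result = []
    for o in orders:
        for c in customers:
            if c["id"] == o["customer_id"]:
                result.append((o, c))
    return result

# Build a dict once, then every lookup is average O(1) -- O(n + m) overall.
def match_fast(orders, customers):
    by_id = {c["id"]: c for c in customers}
    return [(o, by_id[o["customer_id"]])
            for o in orders if o["customer_id"] in by_id]
```

Both return the same matches; only the second one survives contact with a large table.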
I'd say my ability to write code has stayed about the same, but my understanding of what's going on in the background has increased significantly.
Before someone comes in here and says "you are only getting what the LLM is interpreting from prior written documentation", sure, yeah, I understand that. But these things are writing code in production environments now, are they not?
https://bun.com/blog/behind-the-scenes-of-bun-install
Then look at how Anthropic basically acquihired the entire Bun team. If CS fundamentals didn't matter, why would they have?
Even Anthropic needs people that understand CS fundamentals, even though pretty much their entire team now writes code using AI.
And since then, Jarred Sumner has been relentlessly shaving performance bottlenecks off Claude Code. I have watched startup times come way down in the past couple of months.
Sumner might be using CC all day too. But an understanding of those fundamentals (more a way of thinking than specific algorithms) still matters.
I see over and over those with the deeper understanding are able to drive the AI/LLM code generation processes faster and more effectively, and build things that can be built on by others without hitting hard bottlenecks.
The less people understand CS fundamentals, the faster they hit a blockade of complexity. This is not necessarily about bad code, but about sloppy thinking. And CS fundamentals are the fundamentals of information and logic processing.
It is the Centaur issue. You need to help provide the evaluation and framing for the AI/LLM to search out the possibilities and well-known solutions, and code up the prototypes. Without the fundamentals you have to rediscover them slowly after you have already hit the hard problems, pausing for days or months while trying to work your way around them.
You used to need to know things like instruction cycle times, pipeline behavior, registers and so on, because compilers weren't good enough. Then they caught up.
You used to manage every byte of memory and utilize every piece of underlying machinery, the different chips, DMA transfers and so on, because that's what you had to do. Now it's all abstracted away.
These fundamentals are still there, but 99.9% of developers neither care nor bother with them. They don't have to, unless they are writing a compiler or kernel, or just because it's fun.
I think what you're describing is also going to go away in the future. Still there, but most developers are going to move up one level of abstraction.
Maybe you mean “AI tools are making me lose interest in learning anything”, which is… a common reaction, I suppose.
Knowledge builds on knowledge. We learn basic math before advanced math for a reason. The pyramid keeps accumulating from what came before. Understanding the fundamentals still matters, I think.
That'll always be useful.
What's less useful, and what's changed in my own behavior, is that I no longer read tool specific books. I used to devour books from Manning, O'reilly etc. I haven't read a single one since LLMs took off.
Knowing the right tool for the job is even more powerful now because it will prevent you from going down a rabbit hole the LLM thinks is just fine.
I studied Physics fundamentals even though I had a microwave or could buy an airplane ticket. And I deeply enjoyed it. I still do.
I will keep doing it with CS fundamentals. Simply because I enjoy it too much.
CS fundamentals? In this day and age it's only important to keep up with them if you are one of the relatively few people (even in BigTech and adjacent) who are building the fundamentals, or trying to get a job in one where "grinding LeetCode" is important.
Before the pearl clutching starts I had to implement many of the algorithms as part of my $DayJob early in my career as a C bit twiddler across various platforms. But haven’t since 2012-2014.
Just last night I was struggling to browse and organise media on my NAS, because macOS Samba + NFS suck: "What if I build a bespoke web application I can run on the NAS to do this?"
One episode of SNL and two episodes of the Graham Norton Show later, I had a Dockerized Vite + Go application into which I can mount media directories and browse pictures + videos with previews. It's not 100% done, but close enough for me to see if it's something I need to spend time working on.
...but I can also learn stuff - by asking the LLM to teach me.
While that one was building, I stole an idea I saw on the internet and started building an agent harness that uses Qwen3.5:9b as a backend to run tools locally. I specifically asked Claude to build it in parts and explain to me how it works step by step.
Now I know a lot more about that than I did yesterday.
How can you be a good judge? You must have very strong foundations and fundamental understanding.
I don't think anyone at any level has any idea what the future is holding with this rapid pace of change. What some old timers think is going to be useful in a post-Claude world isn't really meaningful.
If I had limited time to prioritize learning at the moment, I'd prioritize comfort with AI tooling (e.g. getting comfortable doing 5 things shallowly in parallel) over going super deep in understanding.
1. Summarize a page with Gemini
2. Then go through it myself to see if I understand the entire page
Which can help a bit with getting up to speed.
What I'm demotivated by is all these new HN posts that are blatantly using LLMs to write, and then hiding the fact they are. Just be honest... There's nothing wrong with making a mistake, you learn from those.
I get that there's a rule against it now, but it will only filter out low-hanging fruit. I still see too many, and I don't think people will ever change in this sense.
80% of my comments lately have been about spotting these posts/comments, and I feel like it's not doing anything except getting me mad.
Because otherwise you are training to become a button-pressing cocaine monkey?
It's not a failing of yours or anyone else's, but the idea that people will remain intellectually disciplined when they can use a shortcut machine is just not going to work.
Dictionaries have made me feel like studying languages is pointless. People, why do you think it’s still important to stay strong in languages when dictionaries exist?
In the AI era, is it still worth spending significant time reading deep CS books like Designing Data-Intensive Applications by Martin Kleppmann?
Part of my hesitation is that AI tools can generate implementations for many distributed system patterns now. At the same time, I suspect that without understanding the underlying ideas (replication, consistency, partitioning, event logs, etc.), it’s hard to judge whether the AI-generated solution is actually correct.
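As a toy illustration of the kind of detail those books teach you to catch: naive `hash(key) % n` partitioning remaps most keys whenever the node count changes, forcing mass data movement, which is exactly the sort of subtlety that's easy to miss in an AI-generated solution. A sketch, assuming a deterministic hash rather than any real system's scheme:

```python
import zlib

def partition(key: str, n_nodes: int) -> int:
    # Deterministic checksum so the demo is reproducible across runs.
    return zlib.crc32(key.encode()) % n_nodes

keys = [f"user:{i}" for i in range(1000)]
before = {k: partition(k, 4) for k in keys}
after = {k: partition(k, 5) for k in keys}
# With mod-based partitioning, roughly 4/5 of keys land on a
# different node when going from 4 to 5 nodes.
moved = sum(before[k] != after[k] for k in keys)
```

Consistent hashing exists precisely to shrink that fraction, and that's the kind of judgment call you can't make about generated code without the underlying ideas.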
For those who’ve read DDIA or similar books, did the knowledge meaningfully change how you design systems in practice?
Knowledge is still power, even in the AI age. Arguably even more so now than ever. Even if the AI can build impressive stuff, it's your job to understand the stuff it builds. Also, it's your job to know what to ask the AI to build.
So yes. Don't stop learning for yourself just because AI is around
Be selective with what you learn, be deliberate in your choices, but you can never really go wrong with building strong fundamentals
Edit: What I can tell you almost for certain is that offloading all of your knowledge and thinking to LLMs is not going to work out very well in your favor
I'd also second bluefirebrand's point that "it's your job to know what to ask the AI to build" - https://news.ycombinator.com/item?id=47394349
Those are great answers to the question you did ask, but I'd also like to answer a question you didn't ask: whether AI can improve your learning, rather than diminish it. The answer is absolutely a resounding yes. You have a world-class expert that you can ask to explain a difficult concept to you in a million different ways with a million different diagrams; you have a tool that will draft a syllabus for you; you have a partner you can have a conversation with to probe the depth of your understanding of a topic you think you know, one that can help you find the edges of your own knowledge, tell you what lies beyond those edges, tell you what books to go check out at your library to study those advanced topics, and so much more.
AI might feel like it makes learning irrelevant, but I'd argue it actually makes learning more engaging, more effective, more impactful, more detailed, more personalized, and more in-depth than anyone's ever had access to in human history.