Now, they are programming a chip from the seventies using an editor/assembler that was written in 1983 and has a line editor, not a full-screen one.
We had a total of 10 hours of class + lab where I taught them about assembly language and told them about the registers, instructions, and addressing modes of the chip, memory map and monitor routines of the Apple, and after that we went and wrote a few programs together, mostly using the low-resolution graphics mode (40x40): a drawing program, a bouncing ball, culminating in hand-rolled sprites with simple collision detection.
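For the curious, a minimal sketch of the kind of thing we wrote, using the Apple II monitor ROM's documented lo-res entry points (PLOT at $F800, SETCOL at $F864); the label names, color, and coordinates here are just my illustration, not the students' actual code:

```
; Plot one lo-res block via the monitor ROM.
; SETCOL ($F864) takes the color in A; PLOT ($F800)
; takes the row in A and the column in Y.
SETCOL  EQU $F864
PLOT    EQU $F800

        LDA #$0C        ; lo-res color 12 (green)
        JSR SETCOL      ; set the current plotting color
        LDA #20         ; row (vertical position, 0-39)
        LDY #10         ; column (horizontal position, 0-39)
        JSR PLOT        ; draw the block
        RTS
```

A sprite is then just a loop over a small table of row/column/color triples, which is roughly where the collision-detection exercise started.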
Their assignment is to write a simple program (I suggested a low-res game like Snake or Tetris but they can do whatever they want provided they tell me about it and I okay it), demo their program, and then explain to the class how it works.
At first they hated the line editor. But then a very interesting thing happened. They started thinking about their code before writing it. Planning. Discussing things in advance. Everything we'd told them to do before coding in previous classes, but which they never did, because a powerful editor was right there, so why not use it?...
And then they started to get used to the line editor. They told me they didn't need to really see the code on the screen, it was in their head.
They will of course go back to modern tools after class is finished, but I think it's good for them to have this kind of experience.
But yeah, my hunch is "the old way" - although I'm not sure we can even call it that - is likely still on par with an "agentic" workflow if you view it through a wider lens. You retain much better knowledge of the codebase. You improve your understanding of coding concepts (active recall is far stronger than passive recognition).
But when it comes to the final act I find myself unwilling to let an LLM write the actual code - I still do it myself.
Perhaps because my main project at the moment is a game I've been working on for four years, so the codebase is sizable, non-trivial, and all written by me. My strong sense even since coding LLMs showed up has been that continuing to write the code is important for keeping it coherent and manageable as a whole, including my mental model of it.
And also: for keeping myself happy working on it. The enjoyment would be gone if I leaned that far into LLMs.
I appreciate that the author understands why doing everything "the old way" is good. AI is a tool, it can't be a replacement for how you think and it can't be a replacement for the actual work.
I wish more people had a desire to understand the inner workings of things, because it makes you better at actually using tools. Implementing compilers, databases, OSes, control systems, etc. is like practicing swimming. Yeah, you might never swim again, but the muscle memory will be there when you need to get out of the ocean (I know this is a strained metaphor).
Knowing more can only be a boon to using LLMs for coding, and it's really a general problem in ML. I work in a science field as a hw/sw engineer, and I've seen so many pure data science people say they can replace all our work with a model, flail for 2 years, and then their whole org gets canned. If they just read a textbook or collaborated (which they never do, no matter how polite you are), they'd have been able to leverage their data science skills to build something great; instead they just toil away, never making it past step 0.
Then, when credits run out, it's show time! The code is neatly organized, abstractions make sense, comments are helpful, so I have solid ground to do some good old organic human coding. I make sure that when I'm approaching limits I'm asking the AI to set the stage.
I used to get frustrated when credits ran out because the AI was making something I would need to study to comprehend. Now I'm eager for the next “brain time hand-out”.
It sounds weird but it's a form of teamwork. I have the means to pay for a larger plan but I'd rather keep my brain active.
I still keep hoping there'll be a glut of demand for traditional software engineers once the bibbi in the babka goes boom in production systems in a big way:
https://m.youtube.com/watch?v=J1W1CHhxDSk
But agentic workflows are so good now—and bound to get better with things like Claude Mythos—that programming without LLMs looks more and more cooked as a professional technique (rather than a curiosity or exercise) with each passing day. Human software engineers may well end up out of the loop completely except for the endpoints in a few years.
I’ve spent a lifetime teaching myself programming, computers, and engineering. I have no formal education in these disciplines and find that I excel due to the self-taught nature of my background.
I take a very metered approach to AI and use it for autocomplete while still scrutinizing every token (not the AI kind) as well as an augment to my self-pedagogy. It’s great to be able to “query” or get a summary from a set of technical documents on demand.
However, I don’t understand the desire to remove oneself from the process with AI. I simply don’t do anything that won’t teach me something new or improve my existing skills.
There’s more to engineering than simply programming. Both the engineer and the intended user base must also understand the system. The value lost is greater than the sum of all the parts when an LLM produces most or all of the code.
I am seeing non-technical people getting involved in building apps with Claude. After the Openclaw and other agentic obsession trends, I just don't see it as pragmatic to continue down the road of AI obsession.
In most other aspects of life my skills were valued because of my ability to care about details under the hood and my ability to get my hands dirty on new problems.
Curious to see how the market adapts and how people find ways to communicate this ability for nuance.
> One solution to this constant companion problem: Spend more time with your phone out of easy reach. If it’s not nearby, it won’t be as likely to trigger your motivational neurons, helping clear your brain to focus on other activities with less distraction.
Reminds me of this study: "The mere presence of a smartphone reduces basal attentional performance"
The effect persisted even when the phone was switched off. It only went away when the phone was moved to a different part of the building.
I saw this quote when looking at the Recurse Center website. How does one usually go about something like this if they work full time? Does this mainly target those who are just entering the industry or between jobs?
I know the article is mostly about what the author built at the coding retreat, but now he has me interested in trying to attend one!
fine, but the gym analogy breaks down somewhere. in a gym, the person who actually lifts heavier gets noticed. in software, the person with the right bio and the right network gets noticed, regardless of whether they've ever lifted anything real.
you can spend three years learning compilers properly and have a handful of readers. someone else ships a wrapper on a saturday and lands a pmarca quote tweet by monday.
coding the old way is good for you. i'm not convinced it's what gets you noticed. the strain was never really what got rewarded in the first place.
Recently I've been trying to combat this by learning things "deeper", i.e., yes, I can secure and respond to container-based threats, but how do containers actually work deep down?
So far I think it’s working well and as an odd plus it’s actually helping me use AI more efficiently when I need to.
The old way?! So not using AI is already being called "the old way"?!!
That statement sets off alarm bells about writing on the internet and how much trust should be put in it, not that I'm the first one to notice.
What scares the shit out of me are all these new CS grads who admit they have never coded anything more complex than basic class assignments by hand, who just let LLMs push straight to main for everything, and who get hired as senior engineers.
It is like hiring an army of accountants who have never done math on paper and exclusively let TurboTax do all the work.
If you have never written and maintained a complex project by hand, you should not be allowed to be involved in the development of production bound code.
But also, I have felt this way about the industry since long before LLMs. If you are not confident enough to run Linux on the computer in front of you, no senior sysadmin will hire you to go near their production systems.
Job one for everyone I mentor is to build Linux from scratch, and if you want an LLM, build all the tools to run one locally for yourself. You will be way more capable and employable if you do not skip straight to using magic you do not understand.
This would probably require cooperation during model training, but now that I think of it, is there adversarial research on LLMs? Can you design text data specifically to mess with LLM training? Like, what is the 1MB of text data that, if I insert it into the training set, harms LLM training performance the most?
the old way, which is about one year ago?
I remember writing BASIC on the Apple II back when it wasn't retro to do so!
Typing and thinking in English is demonstrably slower than in code/the abstract (Haskell for me.)
And no, I didn't write English plans before AI. Or have a stream of English thought in my head. Or even pronounce code as I read and wrote it. That's low-skill stuff.
1. It increases the chances of any bugs being found and resolved.
2. It encourages the author to be more careful with their code to avoid long reviews with a lot of findings.
3. It ensures at least two people - the author and approver - have familiarity with the code.
4. It spreads responsibility for the code across at least two people - the author and approvers.
It's clear this article's author does not review their own code. I sure hope that code is not used for anything important.
> 15 years of Clojure experience
My God I’m old.
> There were 2 or 3 bugs that stumped me, and after 20 min or so of debugging I asked Claude for some advice. But most of the debugging was by hand!
Twenty whole minutes. Us old-timers (I am 39) are chortling.
I am not trying to knock the author specifically. But he was doing this for education, not for work. He should have spent more like 6 hours before desperately reaching for the LLM. I imagine after 1 hour he would have figured it out on his own.
I do the former for fun. The latter to provide for my family.
There is a reason old men take on hobbies like woodworking and fixing old cars and other stuff that has been replaced by technology.
(I swapped the title for the subtitle earlier because I thought it was more informative. What I missed was the flamebaity effect that "the old way" would have. Obvious in hindsight!)