2025: Stopped writing so much Java, so used VSCode exclusively for Python, TS etc, with Claude Code or Cline.
2026: Time is split between Codex App (40%), Claude App (30%) and VSCode with Claude Code (30%).
Some other thoughts:
* Overall I feel like the era of opening an IDE in the traditional sense is coming to an end.
* Tech-stack wise I am much more open to trying out new things than before since LLMs will help with the setup and debugging.
* For small teams like ours, code reviews are the bottleneck, and we constantly have to decide what code we review vs what we don't.
* Building seems easy these days, but (1) there's so much competition now in every field, (2) much more product polish is expected than before, and (3) most products compete with Claude, whether they realize it or not.
For "AI"? I sometimes paste some code into chatgpt.com and ask for assistance, but I don't have a specific integration set up.
I have LSP set up for Python and Go, plus some smart configurations for YAML, JSON, etc. I keep all my work logs in org-mode.
Since then I've done all my dev work with Claude Code, and I'm currently pulling more and more workflows into Codehydra, including PR and issue management.
FileZilla for FTP.
HeidiSQL and MariaDB for SQL.
1Remote for multi-session SSH.
KeePass for passwords, keys, configuration files or any sensitive data.
Occasionally I use Notepad++, VS Codium or Zed when I need to.
A few things I've learned:
Tests were important before AI and are even more important now. AI can introduce unintended outcomes in subtle ways, so having a strong test suite isn't optional — it's your safety net against changes you didn't fully review.
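To make the safety-net point concrete, here is a tiny, purely illustrative example (the `slugify` function and its edge cases are made up): tests that pin down edge-case behavior are exactly what catches an AI refactor that "improves" a function while silently changing its output.

```python
import re

# Hypothetical function an AI refactor might rewrite. The edge-case tests
# below make its current behavior explicit, so any subtle change fails loudly.
def slugify(title: str) -> str:
    """Lowercase, collapse runs of non-alphanumerics into single hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

def test_slugify_edge_cases():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  --  ") == ""      # pure punctuation collapses to empty
    assert slugify("Émile") == "mile"   # non-ASCII is dropped; test documents it

test_slugify_edge_cases()
```

The last assertion is the interesting one: whether dropping non-ASCII is a feature or a bug, the test forces whoever (or whatever) changes it to decide consciously.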
Just because you can doesn't mean you should. AI makes it trivially easy to refactor, swap frameworks, rebuild entire modules. But it doesn't mean you should rework everything daily to chase the newest JS framework. Restraint matters more now, not less.
The biggest limitation: AI follows patterns from the most common repos. For standard work that's fine — often it's exactly what you need. But when you hit what I'd call "unsolved problems" — things that don't have an industry-accepted solution yet — AI falls apart. It'll confidently generate something that looks right but isn't. That's where knowing how to plan, prompt with the right context, and recognise when the output is wrong becomes the actual skill.
I am riding this tech into the ground and have been working since 2008, off and on, to shut down anyone who is using it and migrate them to modern platforms. And I'm still getting contracts to do so! I have done your standard modern SaaS gigs as well, but these days I find shutting down legacy tech enjoyable work, while playing the startup/SaaS game is not.
- Helix editor (no LSP, no plugins)
- Workmux for Git worktree and tmux automation
- Nushell and iTerm
- Claude Code for code generation
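The worktree-plus-tmux pattern in the list above can be sketched in plain shell. This is a rough illustration of what a tool like Workmux automates, not its actual commands; the repo path and branch name are made up:

```shell
# Sketch: one git worktree per branch, one tmux session per worktree,
# so each coding-agent session gets an isolated checkout.
set -eu
base=$(mktemp -d)
repo="$base/main"
git init -q -b main "$repo"
git -C "$repo" config user.email dev@example.com
git -C "$repo" config user.name dev
git -C "$repo" commit --allow-empty -q -m init

# Add an isolated checkout for a (hypothetical) feature branch.
git -C "$repo" worktree add -q "$base/feature-x" -b feature-x

# Open a detached tmux session rooted in the new worktree
# (skipped gracefully if tmux isn't installed).
command -v tmux >/dev/null && tmux new-session -d -s feature-x -c "$base/feature-x" || true

git -C "$repo" worktree list
```

The point of the pattern: agents and editors in different sessions never step on each other's working directory, and cleanup is just `git worktree remove`.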
Had to move to Linux because I wanted to write some network-programming tools (XDP, libnetfilter_queue), and Linux provides everything I need.
I've only used VS Code a few times.
The only thing I lack is a home network setup, to fully test those programs.
When I'm using an LLM to generate technical debt, it's Visual Studio Code and GitHub's AI tool, whose name I can't remember.
inside the session i use nvim for edits, terminal panes for running tests / commands etc, and increasingly pi as a coding agent instead of claude code.
i sometimes toy around with orchestration projects like capy.ai or conductor but haven't really been impressed.
probably worth noting that usually all code i push will have been written by me. even if LLMs can output the same, i find it's usually faster to implement it myself than to convince myself that the LLM output is correct.
VSCode/GH copilot -> windsurf -> Zed/Claude code -> Zed/codex -> Zed/opencode -> Antigravity/opencode
I'm only using Antigravity because they have good limits for now (but it's only a matter of time before that goes away, and then I'll go back to Zed).