Underneath it's just a system prompt, or more likely a prompt layered on top: "You are a frontend engineer, competent in React, Next.js, and Tailwind CSS." The stack details, project layout, and other key information are already in the CLAUDE.md; for anything more, the model is going to call file-read tools etc.
I think it's more theatre than utility.
What I have taken to doing is having a parent folder and then frontend/, backend/, infra/, etc. as children:
    parent/CLAUDE.md
    frontend/CLAUDE.md
    backend/CLAUDE.md
The parent/CLAUDE.md provides a high-level view of the stack ("FastAPI backend with Postgres, Next.js frontend with Tailwind, etc."). The parent/CLAUDE.md also points to the children's CLAUDE.md files, which hold the more granular information.
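For concreteness, the parent/CLAUDE.md in a setup like this might look roughly as follows (the contents are made up, just to show the shape):

    # Project overview
    FastAPI backend with Postgres; Next.js frontend with Tailwind.

    ## Services
    - frontend/: Next.js + Tailwind UI; conventions in frontend/CLAUDE.md
    - backend/:  FastAPI + Postgres API; schema notes in backend/CLAUDE.md
    - infra/:    deployment; details in infra/CLAUDE.md

    ## Cross-cutting rules
    - Design docs live in RFC/; write one before any multi-service change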
I then just spawn a Claude in the parent folder, set up plan mode, go back and forth on a design, have it dump the design out as markdown to RFC/, and after that let it go to work. I find it does really well then, as all the changes it makes are made with the context of the other services.
I advise people to only use subagents for stuff that is very compartmentalized because they're hard to monitor and prone to failure with complex codebases where agents live and die by project knowledge curated in files like CLAUDE.md. If your main Claude instance doesn't give a good handoff to a subagent, or a subagent doesn't give a good handback to the main Claude, shit will go sideways fast.
Also, don't lean on agents for refactoring. Their ability to refactor a codebase goes in the toilet pretty quickly.
I spent a few hours trying stuff like this and the results were pretty bad compared to just using CC with no agent specific instructions.
Maybe I needed to push through and find a combination that works but I don't find this article convincing as the author basically says "it works" without showing examples or comparing doing the same project with and without subagents.
Anyone got anything more convincing to suggest it's worth me putting more time into building out flows like this instead of just using a generic agent for everything?
Last week I asked Claude Code to set up a Next.js project with internationalization. It tried to install a third-party library instead of using the internationalization method recommended for the latest version of Next.js (using Next's middleware), and could not produce a functional version of the boilerplate site.
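For reference, the middleware-based approach from the Next.js docs boils down to something like this minimal sketch (the locale list, default locale, and matcher here are assumptions; the official example additionally negotiates the locale from the Accept-Language header):

    // middleware.ts: redirect paths that lack a locale prefix
    import { NextResponse } from 'next/server'
    import type { NextRequest } from 'next/server'

    const locales = ['en', 'de']
    const defaultLocale = 'en'

    export function middleware(request: NextRequest) {
      const { pathname } = request.nextUrl

      // Already has a locale prefix (/en/..., /de/...)? Let it through.
      const hasLocale = locales.some(
        (locale) => pathname === `/${locale}` || pathname.startsWith(`/${locale}/`)
      )
      if (hasLocale) return

      // Otherwise redirect, e.g. /about -> /en/about.
      request.nextUrl.pathname = `/${defaultLocale}${pathname}`
      return NextResponse.redirect(request.nextUrl)
    }

    export const config = {
      // Skip Next.js internals and files with an extension.
      matcher: ['/((?!_next|.*\\..*).*)'],
    }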
There are some specific cases where agentic AI does help me but I can't picture an agent running unchecked effectively in its current state.
With all due respect to the .agents/ markdown files, Claude Code, like other LLMs, often gets fixated on a certain narrative, and no matter what the instructions are, it repeats that wrong choice over and over and over again, while “apologizing”…
Anything beyond a close and intimate review of its implementation is doomed to fail.
What made things a bit better recently was setting up Gemini CLI and Claude Code to take turns designing, reviewing, implementing, and testing each other's work.
My gut feeling from past experience is that we have git, but no git-flow yet: a standardized approach that is simple to learn and implement across teams.
Once (if?) someone just "gets it right" and has a reliable way to break this down to the point that engineers can efficiently review specs and code against expectations, that will be the moment when being a coder takes on a different meaning, at large.
So far, all the projects I've seen end up building "frameworks" to match each person's internal workflow. That's great and can be very effective for a single person (it is for me), but unless it can be shared across teams, throughput will still be limited (compared to that of a team of engineers with the same tools).
Also, refactoring a project to fully leverage AI workflows might be inefficient compared to rebuilding from scratch with those workflows in place from zero, since the context docs that should have been built in tandem with development cannot be backported: that knowledge is likely already lost to time, accrued as technical debt.
Fast decision-making is terrible for software development. You can't make good decisions unless you have a complete understanding of all reasonable alternatives. There's no way that someone who is juggling 4 LLMs at the same time has the capacity to consider all reasonable alternatives when they make technical decisions.
IMO, considering all reasonable alternatives (and especially identifying the optimal approach) is a creative process, not a calculation. Creative processes cannot be rushed. People who rush into technical decisions tend to go for naive solutions; they don't give themselves the space to have real lightbulb moments.
Deep focus is good but great ideas arise out of synthesis. When I feel like I finally understand a problem deeply, I like to sleep on it.
One of my greatest pleasures is going to bed with a problem running through my head and then waking up with a simple, creative solution which saves you a ton of work.
I hate work. Work sucks. I try to minimize the amount of time I spend working; the best way to achieve that is by staring into space.
I've solved complex problems in a few days with a couple of thousand lines of code which took some other developers, more intelligent than myself, months and 20K+ lines of code to solve.
I was working on a large-ish R analysis. In R, people generally start by loading entire libraries, like

    library(a)
    library(b)

etc., leading to namespace clashes. It's better practice to replace all calls to package functions with namespace-qualified calls, i.e., it's better to do

    a::function_a()
    b::function_b()

than to load both libraries and blindly trust that function_a() and function_b() come from a and b.
I asked Claude Code to take a >1000 LOC R script and replace all function calls with their namespace-qualified form. It ran one subagent to look for function calls, identified >40 packages, and then started one subagent per package, for >40 subagents. Cost-wise (and speed-wise!) it was mayhem, as every subagent re-read the script. It was far faster and cheaper, though a bit harder to judge, to just copy-paste the R script into regular Claude and ask it to carry out the same task. The lesson is that subagents are often costly overkill.
I see people who have never coded in their life signing up for Lovable or some other coding agent and trying their luck.
What cements this thought pattern in your post is this: "If the agents get it wrong, I don’t really care—I’ll just fire off another run"
If code is a liability and the best part is no part, what about leveraging Markdown files only?
The last programs I created were just CLI agents with Markdown files and MCP servers (some code here, but very little).
The feedback loop is much faster, allowing me to understand what I want after experiencing it, and self-correction is super fast. Plus, you don't get lost in the implementation noise.
https://github.com/pchalasani/claude-code-tools/tree/main?ta...
If the first CLI agent just needs a review or suggestions on approach, I find it helps to have it ask the other CLI agent to dump its analysis into a markdown file, which it can then look at.
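Both CLIs support a non-interactive prompt flag, so the hand-off can be as simple as something like this (the prompts and file paths are made up for illustration):

    # Have Gemini dump its analysis to a markdown file...
    gemini -p "Review the design in RFC/auth.md and list risks" > review.md
    # ...then have Claude Code read it and act on it.
    claude -p "Read review.md and address the reviewer's concerns"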
The idea was to encapsulate the context for a subagent to work on in a single GitHub issue/document. I've yet to see how the development/QA subagents will fare in real-world scenarios relying only on the context in the GitHub issue.
Like many others here, I believe subagents will starve for context. The main Claude Code agent is context-rich, while Claude subagents are context-poor.
Ideally I would like to spin off multiple agents to solve multiple bugs or features. The agents would have to use the CI in GitHub to get feedback on tests. And I would like to view their work in an IDE, because I like the ability to understand code by jumping through definitions.
Support for multiple branches at once - I should be able to spin off multiple agents that work on multiple branches simultaneously.
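One way to approximate this today is one git worktree per branch, each running its own Claude Code instance (branch names and prompts here are made up):

    # Two isolated checkouts of the same repo, one per task
    git worktree add ../proj-fix-123 -b fix/bug-123
    git worktree add ../proj-feat-x  -b feat/feature-x

    # One agent per worktree; pushing a PR lets GitHub CI give test feedback
    (cd ../proj-fix-123 && claude -p "Fix bug #123, then push and open a PR")
    (cd ../proj-feat-x  && claude -p "Implement feature X, then push and open a PR")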
> "Managing Cost and Usage Limits: Chaining agents, especially in a loop, will increase your token usage significantly. This means you’ll hit the usage caps on plans like Claude Pro/Max much faster. You need to be cognizant of this and decide if the trade-off—dramatically increased output and velocity at the cost of higher usage—is worth it."
Am I the only one convinced that all of the hype around coding agents like Codex and Claude is 85% BS?