>> Small decisions have to be made by design/eng based on discovery of product constraints, but communicating this to stakeholders is hard and time consuming and often doesn’t work.
This implies that a great deal of extraneous work and headaches result from stakeholders not having a clear mental model of what they need the software to do, versus what is secondary or could be dispensed with through minor tweaks to an operational flow, usage guidance, or a terms-of-service document. In my experience, even more valuable than having my own mental model of a large piece of software is having an interlocutor who represents the stakeholders and end users, understands the business model completely, and has the authority to say: (A) we absolutely need to remove this constraint, or (B) if this is going to cost an extra 40 hours of coding, maybe we can find a workflow on our side that gets around it, or a shortcut, and shelve this for now so you can move on with the rest of the project.
Clients usually have a poor understanding of where the constraints are and why some seemingly easy problems are very hard, or why some problems that seem hard to them are actually quite easy. I find that giving them a clear idea of the effort involved in each part of fulfilling a request often puts me in direct contact with someone who can make the call on whether it's actually necessary.
> Coding agents are designed to be accommodating: they don’t push back against prompts, since they have neither the authority nor the context to do so. They may ask for clarification on what was specified, but they won’t say “wait, have you considered doing X instead?” A human developer would, or at least they’d raise a flag. An LLM produces plausible output and moves on.
> This trait may be desirable in a virtual assistant, but it makes for a bad engineering teammate. The willingness to engage in productive conflict is part and parcel of good engineering: it helps broaden the search in the design space of ideas.
Whenever non-technical people ask me about LLMs, I tell them this: the goal of an LLM is not to give you correct answers. The goal of an LLM is to continue the conversation.
You ask the coding assistant for a brand new feature.
The coding assistant says: we have two, three, or four different paths we could take. Maybe it recommends a specific one. Once you pick an option, it can ask more specific questions.
The database looks like this right now: should we modify this table, which would be the simplest solution, or create a new one? If you'll eventually want a many-to-one relationship for this component, we should create a new table and reference it via a join table (both options are sketched below). Which approach do you prefer?
What about the frontend? We could surface controls for this on our existing pages; however, for reasons x, y, and z I'd recommend creating a new page for the CRUD operations on this new feature. Which would you prefer?
Now that we've gotten the big questions squared away, do you want to proceed with code generation, or would you like to dig deeper into either the backend or the frontend implementation?
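To make the schema question above concrete, here's a minimal sketch of the two options. The table and column names (orders, labels, order_labels) are hypothetical stand-ins for whatever the real domain objects would be:

```python
# Hypothetical sketch of the two schema options the assistant might lay out.
# Names are made up for illustration; only the trade-off matters.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# The existing table, before the new feature.
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT)")

# Option A: modify the existing table. Simplest change, but the new value
# lives inline and each row can only ever hold one of it.
cur.execute("ALTER TABLE orders ADD COLUMN label TEXT")

# Option B: a separate table referenced via a join table. More moving parts,
# but the relationship can later fan out (several labels per order) without
# another migration.
cur.execute("CREATE TABLE labels (id INTEGER PRIMARY KEY, name TEXT UNIQUE)")
cur.execute("""
    CREATE TABLE order_labels (
        order_id INTEGER NOT NULL REFERENCES orders(id),
        label_id INTEGER NOT NULL REFERENCES labels(id),
        PRIMARY KEY (order_id, label_id)
    )
""")
conn.close()
```

The point isn't the SQL; it's that the cheap option and the flexible option diverge early, and only someone who knows the product roadmap can pick between them.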
Once you have a comprehensive plan together, or a fairly full context window, agents have a lot of trouble zooming out. This is particularly painful with some coding agents: because they load your existing code into context, they get weighted down heavily by what already exists (which is what makes them good at other tasks), rather than considering what may be significantly simpler and better for net-new features or the more nascent areas of your codebase.
> no it still doesn't work. Do I have to let OpenAI fix this or can you handle it?
I can definitely handle this. Let me try a completely different approach - …
Previously: send it and figure out issues on the fly
Now: write a half-page description and ask the LLM what info is missing from my document that it would need in order to implement
That’s absolutely not the same as real team dynamics, but in principle it seems to me that LLMs can do work that is directionally “here is a target state” and project-manage towards it.
It can. It totally is able to refuse and then give me options for how it thinks it should do something.
So if the AI can surface misunderstandings through fast prototyping, it can cut through a lot of meeting BS.
In practice, the truth is somewhere in the middle as always.
In summary, the user research we have conducted thus far has uncovered the central tension that underlies the use of coding assistants:
1. Most technical constraints require cross-functional alignment, but communicating them during stakeholder meetings is challenging due to the context gap and the cognitive load involved
2. Code generation cannibalizes the implementation phase, where additional constraints were previously caught, shifting the burden of discovery to code review, where they are even harder and more expensive to resolve
How to get around this conundrum? The context problem must be addressed at its inception: during product meetings, where there is cross-functional presence and different ideas can be entertained without rework cost. If AI handles the implementation, then the planning phase has to absorb the discovery work that manual implementation used to provide.
They're emphasizing one thing too much and another not enough. First, the communication problem: either the humans are getting the right information and communicating it, or they aren't. The AI has nothing to do with this; it's not preventing communication at all. If anything, it will now demand more of it, which is good.
Second, the "implementation feedback". Yes, 'additional constraints' were previously encountered by developers trying to implement asinine asks, and would force them to go back and ask for more feedback. But now the AI goes ahead and implements crap. And this is perfectly fine, because after it churns out the software in a day rather than a week, anyone who tries to use the software will see the problem, and then go back and ask for more detail. AI is making the old feedback loop faster. It's just not at implementation-time anymore.
Edit: The real danger of AI is that it's eager to build exactly what we ask for, even when it's an architectural disaster.