- If it's an internal project (like migrating from one vendor to another, with no user impact) then it takes as long as I can convince my boss it is reasonable to take.
- If it's a project with user impact (like adding a new feature) then it takes as long as the estimated ROI remains positive.
- If it's a project that requires coordination with external parties (like a client or a partner), then the sales team gets to pick the delivery date, and the engineering team gets to lie about what constitutes an MVP to fit that date.
Is it going to take more than two days?
Is it going to take more than two weeks?
Is it going to take more than two months?
Is it going to take more than two years?
If you can answer these questions, you can estimate using a confidence interval.
If the estimate is too wide, break it down into smaller chunks, and re-estimate.
If you can't break it down further, decide whether it's worth spending time to gather information needed to narrow the estimate or break it down. If not, scrap the project.
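The doubling-horizon questions above can be sketched as a tiny routine. The day thresholds and the "too wide" ratio here are illustrative choices, not anything canonical:

```python
# Hypothetical sketch of the two-days / two-weeks / two-months / two-years
# questioning process. Thresholds and max_ratio are assumptions.

HORIZONS_DAYS = [2, 14, 60, 730]  # two days, two weeks, two months, two years

def interval_from_answers(answers):
    """answers[i] is True if the task will take more than HORIZONS_DAYS[i].

    Returns a (low, high) bound in days; high is None for 'more than
    two years', i.e. break it down or scrap it."""
    for i, exceeds in enumerate(answers):
        if not exceeds:
            low = HORIZONS_DAYS[i - 1] if i > 0 else 0
            return (low, HORIZONS_DAYS[i])
    return (HORIZONS_DAYS[-1], None)

def too_wide(interval, max_ratio=10):
    """Crude 'break it down further' signal when the bounds are far apart."""
    low, high = interval
    return high is None or (low > 0 and high / low > max_ratio)
```

So "more than two days, but not more than two weeks" yields the interval (2, 14), which is narrow enough to act on.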
I’ve never worked on anything large in software where the time it will take can be reasonably deduced at the accuracy some people here seem to assume is possible. The number of unknown unknowns is always way, way too large, and the process of discovery itself is extremely time-consuming. Usually it requires multiple rounds of prototypes, and the prototypes themselves usually require transferring a massive amount of data to adequately mine for work discovery.
The best you can do is set reasonable expectations with stakeholders around:
- what level of confidence you have in estimates at any point in time
- what work could uncover and reduce uncertainty (prototypes, experiments, hacks, hiring the right consultant, etc) and whether it is resourced
- what the contingency plans are if new work is discovered (reducing specific scope, moving more people (who are hopefully somewhat ramped up), moving out timelines)
They're not perfect (nothing is), but they're actually pretty good. Every task has to be completable within a sprint. If it's not, you break it down until you have a part that you expect is. Everyone has to unanimously agree on how many points a particular story (task) is worth. The process of coming to unanimous agreement is the difficult part, and where the real value lies. Someone says "3 points", and someone points out they haven't thought about how it will require X, Y, and Z. Someone else says "40 points" and they're asked to explain and it turns out they misunderstood the feature entirely. After somewhere from 2 to 20 minutes, everyone has tried to think about all the gotchas and all the ways it might be done more easily, and you come up with an estimate. History tells you how many points you usually deliver per sprint, and after a few months the team usually gets pretty accurate to within +/- 10% or so, since underestimation on one story gets balanced by overestimation on another.
It's not magic. It prevents you from estimating things longer than a sprint, because it assumes that's impossible. But it does ensure that you're constantly delivering value at a steady pace, and that you revisit the cost/benefit tradeoff of each new piece of work at every sprint, so you're not blindsided by everything being 10x or 20x slower than expected after 3 or 6 months.
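The "history tells you how many points you usually deliver" step is just averaging, with the +/- 10% band applied on top. A sketch with invented numbers:

```python
# Illustrative velocity-based forecast; all point values are made up.

def forecast_sprints(backlog_points, velocity_history, margin=0.10):
    """Estimate sprints to burn down a backlog, with the +/-10% band
    a team's stabilized history is said to support."""
    velocity = sum(velocity_history) / len(velocity_history)
    expected = backlog_points / velocity
    return expected * (1 - margin), expected, expected * (1 + margin)

# 120 points of backlog, five past sprints averaging 30 points each:
low, mid, high = forecast_sprints(120, [28, 32, 30, 31, 29])
# -> roughly 3.6 to 4.4 sprints, centered on 4
```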
It took 6 months. Why? Well, it was a legacy app, and we learned that passwords were case-insensitive because the customer sent a video of himself entering his password, which failed. In the video, we could see a sticky note on his monitor with the password written on it.
When we made all the necessary changes, the Dockerfile failed to build: SRE had accidentally deleted the deprecated image whose PHP version had reached EOL.
Estimating is always fun.
1. Economy (bare functionality, prefer low cost and fast delivery over reliability and longevity)
2. Mid tier (good enough quality and reliable but no frills)
3. Luxury (all the bells and whistles)
The business will want all the bells and whistles of 3 but also the pragmatism of 2 but the budget and timeline of 1. So, they don’t actually pick themselves, I choose for them based on the circumstances and we negotiate the finer points.
Devs can gold plate the shit out of any project, but having a plan for #1 is helpful when shit hits the fan and the business needs you to pivot. More than that, it’s useful during the negotiation to illustrate the ramifications of shortcuts. It’s helped me more than a few times to avoid some really stupid decisions from panicking PMs and execs.
IMO this is also a better way to communicate with stakeholders outside the team than committing to a specific date. It gives more context and clearly communicates that this is a probability game after all, since there are quite a few moving variables.
No, when I have hours, I am laser-focused on pissing off the manager who gave me so little of the time the task actually needed. :-)
Very often something like "6-12 months" is a good enough estimate. I've worked in software a long time and I really don't get why many people think it's impossible to give such an estimate. Most of us are developing glorified CRUD apps, it's not rocket science. And even rocket science can be estimated to a usable degree.
Really you have no idea if feature X is going to take 1 day or 1 year?
Whereas the worst overruns are nuclear power plants, which are either switched on and working, or completely useless and taking up space!
So my takeaways were: try to make estimates modular and work out how to carry learnings as your project goes on, and you’ll have an easier time hitting your estimates - and probably get a great reputation for delivering!
This is the perfect approach, given that estimates are top down and work to fill the estimate is bottom up.
"When I estimate, I extract the range my manager is looking for, and only then do I go through the code and figure out what can be done in that time."
For us, an accurate delivery date on a 6 month project was mandatory. CX needed it so they could start onboarding high priority customers. Marketing needed it so they could plan advertising collateral and make promises at conventions. Product needed it to understand what the Q3 roadmap should contain. Sales needed it to close deals. I was fortunate to work in a business where I respected the heads of these departments, which believe it or not, should be the norm.
The challenge wasn't estimation - it's quite doable to break a large project down into a series of sprints (basically a sprint/waterfall hybrid). Delays usually came from unexpected sources, like reacting to a must-have interruption or critical bugs. Those you cannot estimate for, but you can collaborate on a solution. Trim features, push the date, bring in extra help, or crunch. Whatever the decision, making sure to work with the other departments as collaborators was always beneficial.
They would much rather confidently repeat a date that is totally unfounded rubbish which will have to be rolled back later, because then they can blame the engineering team for not delivering to their estimate.
I wonder if it was a mistake to ever call it "engineering", because that leads people to think that software engineering is akin to mechanical or civil engineering, where you hire one expensive architect to do the design, and then hand off the grunt work to lower-paid programmers to bang out the code in a repetitive and predictable timeline with no more hard thinking needed. I think that Jack Reeves was right when he said, in 1992, that every line of code is architecture. The grunt work of building it afterward is the job of the compiler and linker. Therefore every time you write code, you are still working on the blueprint. "What is Software Design?"<https://www.bleading-edge.com/Publications/C++Journal/Cpjour...>
Martin Fowler cites this in his 2005 essay about agile programming, "The New Methodology"<https://www.martinfowler.com/articles/newMethodology.html>. Jeff Atwood, also in 2005, explains why software is so different from engineering physical objects, because the laws of physics constrain houses and bridges and aircraft. "Bridges, Software Engineering, and God"<https://blog.codinghorror.com/bridges-software-engineering-a...>. All this explains not only why estimates are so hard but also why two programs can do the same thing but one is a thousand lines of code and one is a million.
I came into programming from a liberal arts background, specifically writing, not science or math. I see a lot of similarities between programming and writing. Both let you say the same thing an infinite number of ways. I think I benefitted more from Strunk and White's advice to "omit needless words" than I might have from a course in how to build city hall.
Step 1: Customer <-> Sales/Product (i.e., CEO). Step 2: Product <-> Direct to Engineering (i.e., CTO)
The latency between Step 1 and Step 2 is 10 minutes: the CEO leaves the meeting, takes a piss, and calls the CTO.
- Simple features take a day: CTO-to-implementation latency depends on how hands-on the CTO is. In good startups the CTO is the coder. Most features will make their way into the product in days.
- Complex features take a few days: this is a tug of war between the CTO and CEO, and indirectly the customer. The CTO will push back and try to strike a balance with the CEO, while the CEO works with the customer to find out what is acceptable. Again, latency is measured in days.
Big companies cannot do this and will stifle your growth as an engineer. Get out there and challenge yourselves.
> estimates are not by or for engineering teams.
It's surprising how much nuance and variety there is in how management decisions are made in different orgs; a lot depends on personalities, power dynamics, and business conditions that the average engineer has almost no exposure to.
When you're asked for an estimate, you've got to understand who's asking and why. It got to the point in an org I worked for once that the VP had to explicitly put a moratorium on engineers giving estimates because those estimates were being taken by non-technical stakeholders of various stripes and put into decks where they were remixed and rehashed and used as fodder for resourcing tradeoff discussions at the VP and executive level in such a way as to be completely nonsensical and useless. Of course these tradeoff discussions were important, but the way to have them was not to go to some hapless engineer, pull an overly precise estimate based on a bunch of tacit assumptions that would never bear out in reality, and then hoist that information up 4 levels of management to be shown to leadership with a completely different set of assumptions and context. Garbage in, garbage out.
These days I think of engineering level of effort as something that is encapsulated as primarily an internal discussion for engineering. Outwardly the discussion should primarily be about scope and deadlines. Of course deadlines have their own pitfalls and nuance, but there is no better reality check for every stakeholder—a deadline is an unambiguous constraint that is hard to misinterpret. Sometimes engineers complain about arbitrary deadlines, and there are legitimate complaints if they are passed down without any due diligence or at least a credible gut check from competent folks, but on balance I think a deadline helps engineering more than it hurts as it allows us to demand product decisions, designs, and other dependencies land in a timely fashion. It also prevents over-engineering and second system syndrome, which is just as dangerous a form of scope creep as anything product managers cook up when the time horizon is long and there is no sense of urgency to ship.
This is mostly fine when it’s the tooling that does the translating based on rolling historical averages - and not engineers or managers pulling numbers out of their rear.
This is a cop-out. Just because you can’t do it, doesn’t mean it’s impossible :)
There are many types of research and prototyping project that are not strongly estimable, even just to p50.
But plenty can be estimated more accurately. If you are building a feature that’s similar to something you built before, then it’s very possible to give accurate estimates to, say, p80 or p90 granularity.
You just need to recognize that there is always some possibility of uncovering a bug or dependency issue or infra problem that delays you, and this uncertainty compounds over longer time horizon.
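One way to see that compounding is a toy Monte Carlo where each task's duration is right-skewed (occasional blowups, never finishing "negatively early"). The lognormal parameters here are arbitrary, purely for illustration:

```python
import random

# Toy simulation of how per-task risk compounds over a project.
# The lognormal(0, 0.5) distribution is an assumption, not calibrated data.

def simulate_project(n_tasks, trials=10_000, seed=42):
    rng = random.Random(seed)
    totals = []
    for _ in range(trials):
        # each task nominally takes 1 unit; the right tail captures
        # the occasional bug / dependency issue / infra problem
        totals.append(sum(rng.lognormvariate(0.0, 0.5) for _ in range(n_tasks)))
    totals.sort()
    pct = lambda p: totals[int(p * trials)]
    return pct(0.50), pct(0.80), pct(0.90)

p50, p80, p90 = simulate_project(n_tasks=20)
# the gap between p50 and p90 is the buffer a p90 commitment needs
```

The longer the horizon (more tasks), the wider the spread between the median outcome and the p90 outcome, which is exactly why a p90 estimate for a year of work needs far more slack than a p90 estimate for a week.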
The author even gestures in this direction:
> sometimes you can accurately estimate software work, when that work is very well-understood and very small in scope. For instance, if I know it takes half an hour to deploy a service
So really what we should take from this is that the author is capable of estimating hours-long tasks reliably. theptip reports being able to reliably estimate weeks-long tasks. And theptip has worked with rare engineers who can somehow, magically, deliver an Eng-year of effort across multiple team members within 10% of budget.
So rather than claim it’s impossible, perhaps a better claim is that it’s a very hard skill, and pretty rare?
(IMO also it requires quite a lot of time investment, and that’s not always valuable, eg startups usually aren’t willing to implement the heavyweight process/rituals required to be accurate.)
But the author's assessment of the role that estimates play in an organization also rings true. I've seen teams compare their estimates against their capacity, report that they can't do all this work; priorities and expected timelines don't change. Teams find a way to deliver through some combination of cutting scope or cutting corners.
The results are consistent with the author's estimation process - what's delivered is sized to fit the deadline. A better thesis might have been "estimates are useless"?
Hogwash. Has this person never run a business, or interacted with those who have? The business depends on estimates in order to quantitatively determine how much time, money, and resources to allocate to a project. Teams in the manufacturing and construction fields deliver estimates all the time. Why shouldn't IT people be held to the same standard?
If you can't estimate, it's generally because your process isn't comprehensive enough. Tim Bryce said it's very straightforward, once you account for all the variables, including your bill of materials (what goes into the product), and the skill level and effectiveness rating (measured as the ratio of direct work to total time on the job) of the personnel involved. (You are tracking these things, aren't you?)
https://www.modernanalyst.com/Resources/Articles/tabid/115/I...
> The pro-estimation dogma says that these questions ought to be answered during the planning process, so that each individual piece of work being discussed is scoped small enough to be accurately estimated. I’m not impressed by this answer. It seems to me to be a throwback to the bad old days of software architecture, where one architect would map everything out in advance, so that individual programmers simply had to mechanically follow instructions.
If you're not dividing the work such that about ~60% of the time is spent in analysis and design and only ~15% in programming, you've got your priorities backwards. In the "bad old days", systems got delivered on time and under budget, and they shipped in working order, rather than frustrating users with a series of broken or half-working systems. This is because PRIDE, the scientific approach to systems analysis and design, was the standard. It still is in places like Japan. Not so much America, where a lot of software gets produced, it's true, but very little of it is any good.
The key is to keep data on how long past projects actually took (which not a lot of organizations do). But once you have that real data, you can understand all the unknown unknowns that came up and assume that similar things will come up on the new project.
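A minimal sketch of that idea, sometimes called reference-class forecasting: calibrate a new raw estimate with the actual/estimated ratios from past projects. The project history here is invented:

```python
# Hypothetical history of (estimated_weeks, actual_weeks) pairs.
past = [
    (4, 7), (10, 13), (6, 11), (8, 12),
]

def calibrated(raw_estimate_weeks):
    """Scale a fresh estimate by the median overrun ratio of past projects.

    The median (rather than the mean) resists the one catastrophic outlier."""
    ratios = sorted(actual / est for est, actual in past)
    median = ratios[len(ratios) // 2]
    return raw_estimate_weeks * median

# A raw 6-week estimate becomes ~10.5 weeks under this history.
```

The ratio quietly absorbs all the unknown unknowns that hit past projects, without anyone having to name them in advance.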
Timelines can be estimated approximately.
I’ve never had a construction project finish exactly on time, but that doesn’t mean estimates are unwise.
One thing that I'd like to understand then is _why_... Why doesn't management use a more direct way of saying it? Instead of asking for estimates, why don't they say: we have until date X, what can we do? Is it just some American way of being polite? I am sincerely curious :)
I always tell my teams just skip the middlemen and think of estimates as time from the jump. It's just easier that way. As soon as an estimate leaves an engineer's mouth, it is eagerly translated into time by everyone else at the business. That is all anyone else cares about. Better said - that is all anyone else can understand. We humans all have a shared and unambiguous frame of reference for what 1 hour is, or what 1 day is. That isn't true of any other unit of software estimation. It doesn't matter that what one engineer can accomplish in 1 hour or 1 day is different from the next. The same is true no matter what you're measuring in. You can still use buffers with time. If you insist on not thinking of your labor in terms of hours spent, you can map time ranges to eg. points along the Fibonacci sequence. That is still a useful way to estimate because it is certainly true as software complexity goes up, the time spent on it will be growing non-linearly.
- “That seems doable, but I’ll let you know if any problems arise.”
- “That is going to be really tight. I’ll do my best, but if I think it can’t be done in that timeframe, I’ll let you know by the halfway point.”
- “I can’t get that done that fast. I’ll need more time.”
In the third case, when they follow up with “How much more?” I’ll give them a timeframe that fits the second case and includes the notification plan.
If you get 10 tasks of seemingly equal duration, 9 will go well and 1 will hit a reef of unexpected troubles and take forever.
So the practice of doubling is not that stupid. It leaves time in the first 9 to deal with the unexpected disaster.
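The back-of-the-envelope arithmetic for why the 9-in-10 pattern makes doubling sensible (numbers are made up, but the shape is the point):

```python
# Nine tasks land on estimate; the tenth hits a reef and takes 8x.
nominal = 1.0          # estimate per task, in weeks
blowup_factor = 8      # the one task that blows up
tasks = 10

actual_total = 9 * nominal + 1 * nominal * blowup_factor   # 17 weeks of work
doubled_budget = tasks * nominal * 2                       # 20 weeks budgeted

# The doubled budget covers the disaster with a little room to spare,
# even though every individual estimate except one was accurate.
```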
I find that the best approach to solving that is taking a “tracer-bullet” approach. You make an initial end-to-end PoC that explores all the tricky bits of your project.
Making estimates then becomes quite a bit more tractable (though still has its limits and uncertainty, of course). Conversations about where to cut scope will also be easier.
I do a lot of fussy UI finesse work, which on the surface are small changes, so people are tempted to give them small estimates. But they often take a while because you’re really learning what needs to be done as you’re doing it.
On the other end of the spectrum I’ve seen tickets that are very large in terms of the magnitude of the change, but very well specified and understood — so don’t actually take that long (the biggest bottleneck seems to be the need to break down the work into reviewable units).
In the LLM age, I think the ambiguity angle is going to be much more apparent, as the raw size of the change becomes even less of an input into how long it takes.
In that sense, estimation should theoretically become a more reasonable endeavor. Or maybe not; we just end up back where we are because the LLM has produced unusable code or an impossible-to-find bug which delays shipment, etc.
I've worked on multiple teams at completely different companies, years apart, that had the same weird rules around "story points" for JIRA: Fibonacci numbers only, but also anything higher than 5 needs to be broken into subtasks. In practice, this just means 1-5, except not 4. I have never been able to figure out why anyone thought this actually made any practical sense, or whether this is apparently common enough to have been picked up by both teams, or if I managed to somehow encounter two parallel instances of these rules developing organically.
They didn't estimate in 'Story Points'. They used atomic physical constraints.
He described it like this:
There was a standardized metric for all manual operations like "reach, one hand, 18-24 inches" or "pick item 10-100g." Each step had a time in decimal seconds... The objective was to minimize the greatest difference in station time so that no line worker is waiting.
The most interesting part was his conclusion on the result: Modern supply management is a miracle, but manual labor today is much harsher... The goal back then was flow; the goal now is 100% utilization.
It feels like in software, we are moving toward that "100% utilization" model (ticket after ticket) and losing the slack that made the line work.
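That balancing objective ("minimize the greatest difference in station time") can be sketched with a simple greedy heuristic: assign the longest remaining step to the currently least-loaded station. The step times are invented, and greedy longest-first is just one textbook heuristic, not the plant's actual method:

```python
import heapq

# Toy line-balancing sketch. Step times are hypothetical MTM-style values
# in deciseconds; n_stations is the number of line workers.

def balance(step_times, n_stations):
    """Greedily spread timed steps across stations; return the spread
    between the busiest and idlest station (what the plant minimized)."""
    stations = [(0.0, i, []) for i in range(n_stations)]
    heapq.heapify(stations)
    for t in sorted(step_times, reverse=True):   # longest steps first
        load, i, steps = heapq.heappop(stations)  # least-loaded station
        heapq.heappush(stations, (load + t, i, steps + [t]))
    loads = sorted(load for load, _, _ in stations)
    return loads[-1] - loads[0]

spread = balance([42, 35, 30, 28, 21, 19, 14, 11], 3)
# the smaller the spread, the less any worker waits on the line
```

The contrast with the "100% utilization" model is that the objective function here is evenness of flow, not keeping every station maximally busy.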
When someone comes at you for an estimate, you need to be asking for the time budget or expected schedule — not estimating.
I failed to understand this for most of my career. Someone would ask me for an estimate, and I would provide one. But without knowing the expected schedule, the estimate is always either too high or too low.
Scope is always flexible. The feature or commitment is just a name and a date in people’s heads. Nobody but engineers actually care about requirements. Adjust scope to fit the date, everyone is happy. Adjust the date to fit the scope and people will think you’re either late or fooling them.
At my team we think in terms of deliverables and commitments: "I can commit to deliver this by that date under these circumstances."
This mitigated the diverse nature of our thinking.
Choose 2. For example a large feature set can be made quickly, but it will be of poor quality.
Note that cost is somewhat orthogonal, throwing money at a problem does not necessarily improve the tradeoff, indeed sometimes it can make things worse.
Am I misinterpreting things, or is there no overlap with the circumstances argued in the OP? Also, in that case, how do we make quality tradeoffs when all features are necessary for the end product?
Fixed time and fixed scope is essentially impossible, except in trivial cases. What I read the author saying is that he chooses to make it fixed time and has flexibility around scope in his work, because the requirements are more like loose descriptions than a description of exactly what a product should do, while ignoring edge-cases. That sounds like a nice situation. And a perfectly fine way to manage an engineering team. But it also sounds a bit to me like an abdication of responsibility to the engineering team by product, to allow the engineering team to decide what exactly the scope is. Again, that’s a perfectly good way to do it, but it means that product can’t come back and say “that’s not what I was expecting, you didn’t do it.”
I don’t think the author really tackles estimation here, nor the reasons why estimation is a hard and controversial issue, nor what junior engineers are looking for when googling “how do I estimate?”
The real reason it's hard in this industry is that, in general, product controls both scope and time, which are the two major dials by which delivery is managed, but abdicates responsibility for them: it imposes an ill-defined but nonetheless fixed (and unyielding) scope, then demands engineers give specific date estimates to which they'll commit, and work free overtime if they turn out to be wrong.
The author correctly defines a way to resolve this conflict: give engineering more say over scope—but fails to recognize that the root cause is not poor estimation, but rather that product or management denies engineering much say over scope past the initial estimation, and then demands they set fixed dates they commit to before enough is known. Death march projects, in my experience, are generally a failure of product, not engineering.
Multiply your estimate by 3.14159 until you find the actuals and your more accurate estimating coefficient.
https://news.ycombinator.com/item?id=28667174 (2013)
original: https://web.archive.org/web/20170603123809/http://www.tuicoo...
Our team would simply gather around, go through the tasks that were agreed with the business and on count of three, each of us simply raise a thumbs up if we thought we could ship it within two days - otherwise thumbs down.
A thumbs down generally implied we collectively thought the task would take more than two days to ship and might require breaking down; otherwise it was good to go.
My favourite parts:
> My job is to figure out the set of software approaches that match that estimate. […]
> Many engineers find this approach distasteful. […]
> If you refuse to estimate, you’re forcing someone less technical to estimate for you.
Even after many years, I still find it distasteful sometimes but I have to remind myself what everyone gets paid for at the end of the day.
This is exactly how all good art is done. There's an old French saying, une toile exige un mur ("a canvas demands a wall").
An "accurate estimation" is an oxymoron. By definition, an estimate is imprecise. It only serves to provide an idea of the order of magnitude of something: will this work take hours? days? weeks? months? You can't be more accurate. And this does not apply only to software development.
1. When you consider planning, testing, documentation, etc. it takes 4 hours to change a single line of code.
2. To make good estimates, study the problem carefully, allow for every possibility, and make the estimate in great detail. Then take that number and multiply by 2. Then double that number.
Software Estimation: Demystifying the black art by Steve McConnell should be 1st year reading in any software development major in college...
We've largely "solved" this problem in the industry; we just have a problem of getting people to read, and to read the right things.
Yes, he was telling me this tongue in cheek, but in my actual experience this has been eerily accurate.
Planning is inaccurate, frustrating, and sadly necessary.
It usually means there's hidden complexity I haven't found yet. I estimate until the subtasks are small enough (like 4h chunks); otherwise it's all just feeling-based numbers.
> Estimates define the work, not the other way around
> The standard way of thinking about estimates is that you start with a proposed piece of software work, and you then go and figure out how long it will take. This is entirely backwards. Instead, teams will often start with the estimate, and then go and figure out what kind of software work they can do to meet it.
So true. But there are times when the thing to be built is known and an estimate is needed [for political reasons, as TFA explains], which is why sometimes it's the other way around.
The only reasonable way to estimate something is in work hours. Everything else is severely misguided.
Also, if you don't follow up, any estimate is meaningless.
Estimation can be done. It's a skillset issue. Yet the broad consensus seems to be that it can't be done, that it's somehow inherently impossible.
Here are the fallacies I think underwrite this consensus:
1. "Software projects spend most of their time grappling with unknown problems." False.
The majority of industry projects—and the time spent on them—are not novel for developers with significant experience. Whether it's building a low-latency transactional system, a frontend/UX, or a data processing platform, there is extensive precedent. The subsystems that deliver business value are well understood, and experienced devs have built versions of them before.
For example, if you're an experienced frontend dev who's worked in React and earlier MVC frameworks, moving to Svelte is not an "unknown problem." Building a user flow in Svelte should take roughly the same time as building it in React. Experience transfers.
2. "You can't estimate tasks until you know the specifics involved." Also false.
Even tasks like "learn Svelte" or "design an Apache Beam job" (which may include learning Beam) are estimable based on history. The time it took you to learn one framework is almost always an upper bound for learning another similar one.
In practice, I've had repeatable success estimating properly scoped sub-deliverables as three basic items: (1) design, (2) implement, (3) test.
3. Estimation is divorced from execution.
When people talk about estimation, there's almost always an implicit model: (1) estimate the work, (2) "wait" for execution, (3) miss the estimate, and (4) conclude that estimation doesn't work.
Of course this fails. Estimates must be married to execution beat by beat. You should know after the first day whether you've missed your first target and by how much—and adjust immediately.
Some argue this is what padding is for (say, 20%). Well-meaning, but that's still a "wait and hope" mindset.
Padding time doesn't work. Padding scope does. Scope padding gives you real execution-time choices to actively manage delivery day by day.
At execution time, you have levers: unblock velocity, bring in temporary help, or remove scope. The key is that you're actively aiming at the delivery date. You will never hit estimates if you're not actively invested in hitting them, and you'll never improve at estimating if you don't operate this way. Which brings me to:
4. "Estimation is not a skillset."
This fallacy is woven into much of the discourse. Estimation is often treated as a naïve exercise—list tasks, guess durations, watch it fail. But estimation is a practicable skill that improves with repetition.
It's hard to practice in teams because everyone has to believe estimation can work, and often most of the room doesn't. That makes alignment difficult, and early failures get interpreted as proof of impossibility rather than part of skill development.
Any skill fails the first N times. Unfortunately, stakeholders are rarely tolerant of failure, even though failure is necessary for improvement. I was lucky early in my career to be on a team that repeatedly practiced active estimation and execution, and we got meaningfully better at it over time.
1. What is different in software engineering with respect to any other work that requires exploration?
The author mentions "it requires research, that's why it's impossible." But plenty of work requires research, and the people doing it are also asked to provide estimates: writing a book, managing a complicated construction project, doing scientific research, ...
In all of these fields, it is also well known that time estimation is tricky, and there are plenty of examples of deadlines not being met. Yet it looks like these people understand 1) that their estimates are guesses, and 2) that giving an estimate is still useful for their collaborators.
I've worked in academic research, and famously, you sometimes need to write a document for a grant detailing the timeline of your project for the next two years. We all knew what it was (an estimation that will deviate from reality), but we understood why it was needed and how to do it.
I now work as a researcher in the private sector, sometimes very closely with software developers, sometimes doing the same work as them, so I have a strong sense of what is being asked. And I'm often surprised by how often software developers think they are "special" when they have to deal with something that plenty of other people deal with too, and by how often they are completely lost in this situation while those other people manage to work around it pragmatically.
2. Why do so many of these articles not reflect in a balanced way on why people ask for time estimates?
When the article comes to explaining why developers are asked for estimates, the main reason seems to be "because non-developers are idiots, or because of the box-checking system, or because of the big bad managers who want to justify their role, or because it is the metric used to judge the quality of the work."
But at the same time, when they need something, the same developers ask for time estimates all the time. This is just something you need in order to organize yourself. If you know that builders will be working in your home for 6 months, you know you need to prepare differently than if it's 2 days. And how many times has a developer asked for something, not gotten it in time, and concluded that this demonstrated the worker was incompetent? (I'm sure _you_ don't do that, rolling my eyes at the usual answer, but you have to admit that such a conclusion is something people draw, including developers.)
Why, in these articles, is there never any reflection on the fact that if you don't give any estimate, your colleagues, the people you are supposed to work with and not against, don't have the information they need to work properly? The tone is always adversarial: the bad guys want a time estimate. And yes, of course, there are situations where the admin becomes the goal and these requests are ridiculous. But on the other hand, I also understand why developers are asked to follow more process when at the same time they act like teenage-rebel condescending kids. I'm not sure what the distribution is, but even if it is not 50-50, it says so much about the level of reflection when the article is unable to conceive that, maybe, maybe, sometimes, the developer is not the victim genius surrounded by idiots.
(in fact, in this article, there is the mention of "Some engineers think that their job is to constantly push back against engineering management, and that helping their manager find technical compromises is betraying some kind of sacred engineering trust". But, come on, this is a terrible flaw, you should be ashamed of being like that. This sentence is followed by a link to an article that, instead of highlighting how this behavior should be considered as a terrible flaw, frames it as "too idealistic")
My first impression with this is that they give really long estimates.
Also, due to coding agents, you can have them completely implement several different approaches and find a lot of unknown unknowns up front.
I was building a mobile app and couldn’t figure out whether I wanted to do two native apps or one RN/Expo app. I had two different agents do each one fully vibe coded and then tell me all the issues they hit (specific to my app, not general differences). Helped a ton.