Because of this, on the next task I gave it (on the larger side), I ran its work through Codex, which identified 7 glaring unfinished parts of the task.
The trend was starting a part of the task but then leaving a "skeleton" of what I had requested, without any of the actual working parts.
The way I would describe it is a kid cramming his 3-month project into a Sunday evening for Monday's due date.
Cynicism aside, I do wonder what the future will hold, given that current token burn rates aren't sustainable without VC cash. In our enterprise training, Anthropic even pushed us to use Haiku in Claude Code for "many" tasks, so I'm wondering if there's a company-level need of sorts to reduce the burn.
In reality, as they scale up, the models lose nuance and become noisier. The boosters don't want to admit this.
We need highly specialised models and interfaces, not one thing force-fitted to everything.