Don't lecture me on basins of attraction--we all know HK is a great programmer.
If you feel the need to do this, it's time to move on to a tool you trust.
Claude Code changes all the time—it's the whole shitty trend of the day—but you can't tell which of those changes made it better or worse by analyzing results on independent novel tasks.
And you're baking in certain conclusions: "HOLDING / SUSPECTED REGRESSION / CONFIRMED REGRESSION / INCONCLUSIVE". Where's an option for "better than previous baseline"? It certainly seems possible that a session could have better-than-average numbers on the measured things.
Overall, though, there's just so much here that's uncontrolled. The most obvious thing not controlled for is the work itself. What does the typical software project look like? A continued accumulation of more code performing more features. What's gonna make an LLM-based agent have to do more work? Having to deal with a larger, more complicated codebase. Nothing here seems to account for the possibility that a session labeled a regression would actually have scored even lower against last month's Claude Code.
"It's harder to read code than to write code" and "codebases take more effort to modify over time as they grow" are ancient observations.
Drift detection would require static targets and frequent re-attempts.
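To make the point concrete, a minimal sketch of what "static targets and frequent re-attempts" could look like: freeze a fixed task set, re-run it on each harness release, and compare the new score distribution to a frozen baseline. All names and thresholds here are illustrative assumptions, not anything the project actually implements.

```python
# Hypothetical drift check: compare repeated attempts at the SAME static
# tasks against a frozen baseline, so codebase growth can't confound it.
from statistics import mean, stdev

def detect_drift(baseline: list[float], current: list[float],
                 threshold: float = 1.0) -> str:
    """Flag drift when the current mean moves more than `threshold`
    baseline standard deviations away from the baseline mean."""
    base_mu, base_sigma = mean(baseline), stdev(baseline)
    delta = (mean(current) - base_mu) / base_sigma
    if delta <= -threshold:
        return "regression"
    if delta >= threshold:
        return "improvement"   # note: an explicit better-than-baseline state
    return "holding"

# Per-task scores in [0, 1] from re-attempting the fixed task set:
baseline = [0.82, 0.79, 0.85, 0.80, 0.83]
print(detect_drift(baseline, [0.60, 0.62, 0.58, 0.61, 0.59]))  # regression
print(detect_drift(baseline, [0.81, 0.80, 0.84, 0.82, 0.83]))  # holding
```

The key property is that the tasks never change, so any score movement is attributable to the harness rather than to an evolving codebase.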
I use it every day and haven't seen worsening. (It's definitely not static, but the general trend has been good.) But I use it on a codebase that was already very complex before we started using these tools, and roughly every three months has brought significant improvements in usability and accuracy.
Does anyone know of other similar tools that let you track across harnesses while coding?
Running evals as a solo dev is cost-prohibitive, I think.
This project is somewhat unconventional in its approach, but that might surface issues that are masked in typical benchmark datasets.