by ofirpress
7 subcomments
- [I'm on the SWE-bench team] Multiple people have looked into this, for example right in that thread: https://github.com/SWE-bench/SWE-bench/issues/465#issuecomme...
This issue had affected a tiny fraction of existing agents in a tiny fraction of their runs. And we've now issued a fix.
This is a natural part of running a benchmark, I'm sure tiny things like this will keep on getting discovered and we'll keep on fixing them. This doesn't change the overall picture or trends at all.
- Not “may be”: just look how swe-bench scores drop to single digits once it in C#
https://arxiv.org/html/2506.12286v3
by slacktivism123
2 subcomments
- Fascinating case showing how LLM promoters will happily take "verified" benchmarks at their word.
It's easy to publish "$NEWMODEL received an X% bump in SWE-Bench Verified!!!!".
Proper research means interrogating the traces, like these researchers did (the Gist shows Claude 4 Sonnet): https://gist.github.com/jacobkahn/bd77c69d34040a9e9b10d56baa...
Commentary: https://x.com/bwasti/status/1963288443452051582, https://x.com/tmkadamcz/status/1963996138044096969
by mustaphah
2 subcomments
- I speculate something similar (or even worse) is going on with Terminal-Bench [1].
Like, seriously, how come all these agents are beating Claude Code? In practice, they are shitty and not even close. Yes. I tried them.
[1] https://www.tbench.ai/leaderboard
- epochs ago when random forest was part of machine learning nomenclature, we had a strong claim from an adjacent team in the form of a powerpoint circulated upwards that they had achieved almost perfect prediction accuracy.
We relatively quickly identified that the testing set are taken directly from the training set, but the claim has been advertised already so they were more difficult to retract... if it were at all, I left shortly after.
The incentives are not aligned with accurate reporting.
by zelphirkalt
2 subcomments
- Can anyone tell me what is the difficulty in simply not having .git at all during a benchmark run? Why not simply remove anything that is not the code the benchmark runs on? Or just simple oversight?
- I'm not surprised. People really thought the models just kept getting better and better?
- swe-bench's bigger problems include (1) labs train on the test and (2) 50% of the tickets are from django; it's not a representative dataset even if all you care about is Python.
I created a new benchmark from Java commits that are new in the past 6 months to add some variety: https://brokk.ai/power-ranking
- hah the model should get extra credit for discovering this!
> Now I understand the situation perfectly! The issue described in the problem statement is a real bug that was already identified and fixed in later versions of pytest. Since we're working with pytest 5.2.4, we need to apply the same fix.
https://gist.github.com/jacobkahn/bd77c69d34040a9e9b10d56baa...
by jasonjmcghee
1 subcomments
- Very interested to see the updated results. This could really shake up the leaderboard.
by zaptheimpaler
3 subcomments
- It's honestly ridiculous they left git history lying around during a benchmark, and this benchmark made to ICLR in Jan 2024 and no one has detected this issue until now. I don't really trust any benchmarking or tools or claims from this space when they can make such huge basic errors.
by epolanski
1 subcomments
- This is beyond sad and shameful.
- In the meawhile, Oracle stock went up 40% in one one day, based on what Wall Street thinks AI might be...in 4 years...Not a bubble at all...
- Baseball players cheat for tens of millions. The stakes are 2-4 orders of magnitude higher here. I'm not surprised in the least.
- Man I feel so dumb. Why haven't I been doing this in my job, if I could just see the commit that fixed my issue this would all be so easy.
by OtherShrezzing
2 subcomments
- That the answers have been available to them in the environment, and they’re still not hitting 100% on this benchmark is a damning indictment of SOTA model performance.
- Regardless of whether, during this particular evaluation, Claude 4 Sonnet looked at the solution to this particular problem in this particular git repo, this seems like a long-term intractable problem.
How can we ever perform this sort of faux-neutral agentic evaluation in an environment where we want agents to have access to the sum total of knowledge (which will necessarily include being able to learn about the evaluation being conducted and its expectations)?
by pseudosavant
0 subcomment
- If I was doing those tasks, and I found that someone had already fixed it in a future (from my git state) commit, I'd think I was being pretty smart to use that solution too.
Turns out the test shouldn't have the answers included in it?
- A friend is starting a company to do evals by just pitting models agent each other in simulations. Their teaser video is good (and humorous!)
https://kradle.ai/
by ripped_britches
2 subcomments
- Everyone on HN is like “yes I knew it! I was so right in 2021 that LLMs were just stochastic parrots!”
Strangely one of the most predictable groups of people