- GeekBench probably made the right choice optimizing for realistic real-world workloads rather than for the narrower workloads that benefit from very high core counts. GeekBench is supposed to be a proxy for common-use-case performance.
High-core-count CPUs are only useful for specific workloads and should not be purchased as general-purpose fast CPUs. Unless you're doing specific tasks that scale with core count, a CPU with fewer cores and higher single-threaded throughput will be faster for normal use cases.
- The callout against the poor journalism at Tom's Hardware isn't anything new. They have a couple of staff members posting clickbait all the time. Sometimes the links don't even work, or the articles make completely wrong claims. That's par for the course for the site now.
To be fair, the Tom's Hardware article did call out these points and its limitations, so this Slashdot critique is basically restating the content of the Tom's Hardware article, just more critically: https://www.tomshardware.com/pc-components/cpus/apples-18-co...
- You're seriously posting to HN a link to your Slashdot post linking to your year-old blog post complaining about Geekbench 6's multi-threaded test without ever mentioning Amdahl's Law?
Pretending that everything a CPU does is an embarrassingly parallel problem is heinous benchmarking malpractice. Yes, Geekbench 6 has its flaws and limitations. All benchmarks do. Geekbench 6 has valid uses, and its limitations are defensible in the context of measuring what it is intended to measure. The scalability limitations it illustrates are real problems that affect real workloads and use cases. Calling it "broken" because it doesn't produce the kind of scores a marketing department would want to see from a 96-core CPU reflects more poorly on you than it does on Geekbench 6.
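To make the Amdahl's Law point concrete, here is a minimal sketch of what it predicts for a many-core part. The function name and the 95%-parallel figure are illustrative assumptions, not numbers from the post:

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Maximum speedup under Amdahl's Law for a workload where only
    `parallel_fraction` of the work can be parallelized; the remaining
    serial fraction caps the benefit of adding cores."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Even a workload that is 95% parallel tops out far below 96x on 96 cores.
for cores in (8, 16, 32, 96):
    print(f"{cores:3d} cores -> {amdahl_speedup(0.95, cores):5.2f}x")
```

With a 5% serial fraction, 96 cores deliver under a 17x speedup, which is why a benchmark aiming to model common workloads will not (and should not) scale linearly with core count.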
- The real meat from the article: https://dev.to/dkechag/how-geekbench-6-multicore-is-broken-b...
First plot really says it all.
- Anyone who treats Geekbench as a meaningful benchmark (i.e. cites it without a huge disclaimer or other, more meaningful data points) is not to be trusted. You can only really trust it for inter-generational comparisons within a single architecture.
by thot_experiment
- "When you measure, include the measurer" - MC Hammer
- TIL: Slashdot still exists. And it looks exactly as horrible as 20 years ago.
- The strategy is to make outlandish claims and then have people "engaging" to "disprove" all of the claims. This strategy works as long as people are too apathetic and/or stupid to hold liars accountable. It works currently because journalism has significantly less value than tabloid drama to many people, some of whom are just narrative shopping for a fun curated list of ideas (not facts) that fit their personalized echo chamber.
- I'm really confused by the self-aggrandizing here; it muddies up the discussion.
How good is the M5 Max compared to a 96-core Threadripper? What's the tl;dr, and where is the broader assortment of benchmarks?
I just want to see some bar graphs that say "lower is better" or "higher is better".
by phoronixrly
- People already "destroy" the many-core Threadrippers with gaming-oriented Ryzens on appropriately suited workloads; this is clickbait.