
In my analysis, only Gracemont (Atom 5th gen) is good and that’s for two reasons:

1. It has roughly the IPC of Skylake.

2. It's the first Atom with AVX2. Even in 2021 Intel released brand-new Atom cores without AVX2, which was introduced with Haswell (the "Haswell New Instructions"). In most tests, merely enabling AVX2 yields about 10% higher performance. That's why Clear Linux posts higher performance: its packages are tuned for Skylake. That's also the reason Red Hat couldn't target x86-64-v3, which requires AVX2, for their new releases.

All previous Atom cores are slower than you imply:

Tremont (Atom 4th gen) has the IPC of Sandy Bridge. Goldmont (Atom 3rd gen) has the IPC of Core; it does not exceed Core as you imply. And first-gen Atom is dog slow: it's like running a Pentium II/III-era CPU or a first-generation Raspberry Pi. I consider those an unsuccessful attempt by Intel to create a mobile chip; in essence they just brought back a Pentium-era design.

This new Atom is fantastic though.



No, it’s really doing OK for what it is. A Xeon Bronze 3104 is a 6C/6T 1.7 GHz Skylake with an 88 W TDP, and a J5005 is a 2.8 GHz 4C/4T Goldmont running in 15 W. That is 10.2 core-cycle-units for the Skylake and 11.2 core-cycle-units for the Atom. The Skylake is not really all that far ahead - maybe 50% ahead on average in this suite - so the J5005 is running at about 2/3 of Skylake IPC. I assume c-ray is probably using AVX there? And the J5005 still does fine.
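The arithmetic behind that estimate can be sketched briefly (the chip specs and the ~50% overall lead are the figures quoted above; the helper name is just for illustration):

```python
# Core-cycle-units = cores x clock (GHz): a crude aggregate throughput proxy.
def core_cycle_units(cores, ghz):
    return cores * ghz

xeon = core_cycle_units(6, 1.7)   # Xeon Bronze 3104: 10.2
atom = core_cycle_units(4, 2.8)   # Pentium Silver J5005: 11.2

# If the Xeon finishes the suite ~1.5x faster despite slightly fewer
# core-cycle-units, the implied Goldmont-vs-Skylake IPC ratio is:
ipc_ratio = (1.0 / atom) / (1.5 / xeon)
print(round(xeon, 1), round(atom, 1), round(ipc_ratio, 2))  # 10.2 11.2 0.61
```

So "about 2/3 of Skylake IPC" drops straight out of those three quoted numbers.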

There is obviously a dearth of practical benchmarks of real tasks apart from STH and a few others, but if you look at Passmark or other generic benchmarks, like I said, it’s certainly faster than a Core 2 Quad enthusiast desktop, which is completely reasonable and even impressive given its power budget. In Passmark it’s actually even faster than a full Nehalem Core i7 with SMT; it works out just a bit under a Sandy Bridge i5 at a similar clock, so I was a bit conservative there. And Passmark corroborates my “the 3104 is about 50% faster” guesstimate.

(UserBenchmark’s actual measurements - not the speed rating - are likely more accurate than Passmark’s, but I know I’ll get my head bitten off for it!)

https://www.servethehome.com/intel-pentium-silver-j5005-benc...

https://www.cpubenchmark.net/compare/Intel-Pentium-Silver-J5...

(Do remember that Intel’s progress through that era wasn’t really as bad as people say… I’ve seen people claim “5% a generation,” and the only generation close to that low was Broadwell. Skylake is a ton faster than Sandy Bridge clock for clock, and the J5005 still falls a bit below Sandy Bridge. There’s your two generations of progress since Goldmont - Atom went from below Sandy Bridge to matching Skylake over the Tremont/Gracemont generations.)


Userbenchmark.com is worse than useless. It’s straight up lies.


And like I said, I knew I was gonna get my head bitten off for that. This is where you distinguish the bandwagon hangers-on from the people actually interested in technical discussion.

The commentary UserBenchmark provides is terrible, and the “effective speed” composite rating is terrible. The actual int/fp benchmarks are perfectly fine and, in my experience, tend to be more reflective of real benchmarks like SPEC than Passmark’s have been. Passmark does sometimes have oddities; UserBenchmark’s raw numbers haven’t.

But bandwagoners can’t accept that, they just see “UserBenchmark” and their vision goes red and they shake as they struggle to type out “nobody should EVER use UserBenchmark”. I do apologize for being colorful but that’s how it is.

Like I said, there’s a reason I didn’t link it: no matter what you say, there’s a significant number of people who just can’t act mature enough to separate the data from the editorial positions.

It would be nice if we had good SPEC CPU 2017 benchmarks for everything. I would prefer that. But if you want broad comparisons of completely obscure hardware - “what does this ultra-budget Xeon Bronze look like against this Atom vs. a desktop processor from 2007” - the options are limited. You have Passmark, you have UserBenchmark, and you have Geekbench. And people still lose their shit over Geekbench too.

Even Passmark has only seen that Xeon a total of 14 times.

The situation is even worse for GPUs, where UserBenchmark is the only reasonably broad benchmark database available for truly ancient stuff that 3DMark won’t run on. What if you want to compare a 9800 GTX to a Vega 7 and an Intel HD 605? It’s UserBenchmark or nothing. It sucks but it is what it is, and people need to just grow up. Including the owner of UserBenchmark. But the effective speed is just a composite rating, and he’s never actually tampered with the underlying int/fp benchmarks.

Phoronix is also not particularly good and has some serious methodological problems. But nobody else does Linux benchmarks apart from STH and Phoronix, and Phoronix pumps them out like crazy and survives on volume even if the testing is kinda shit. Look at the “Linux games” test: it mixes multiple resolutions into a single result set, includes the same game multiple times, and then pulls out averages; everything is framerates and not frametimes (meaning averages and not minimums), etc. From what I’ve heard the application testing isn’t any better, but I don’t remember specifics.
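The framerate-vs-frametime complaint is easy to make concrete with a toy example (the numbers here are hypothetical, not Phoronix data): the arithmetic mean of per-run FPS hides a slow outlier that frametime-based averaging correctly punishes.

```python
# Three hypothetical benchmark runs, reported as average FPS.
fps_runs = [120.0, 110.0, 20.0]
frametimes_ms = [1000.0 / f for f in fps_runs]   # ms per frame for each run

# Naive arithmetic mean of framerates - what a "just average the FPS
# numbers" methodology produces:
naive_avg_fps = sum(fps_runs) / len(fps_runs)

# Averaging the frametimes instead (the harmonic mean of the FPS values)
# weights each run by how long its frames actually take:
fps_from_frametimes = 1000.0 / (sum(frametimes_ms) / len(frametimes_ms))

print(round(naive_avg_fps, 1), round(fps_from_frametimes, 1))  # 83.3 44.5
```

The naive mean says ~83 FPS; the frametime view says ~44 FPS, because the 20 FPS run dominates the actual time spent rendering. Mixing resolutions and duplicate games into one averaged set compounds exactly this distortion.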

Just goes to show: you can be scum and put out good data, and you can be a pillar of the community and put out bad data. Reddit tone police don’t change that.

There is an awful lot of incredibly useful data in the world that came from people who are extremely disagreeable on a personal level - or much, much worse.


Yeah, life’s too short to figure out which parts of the lies are true.



