
This is my desktop in 2025: http://ett.host.radiomesh.org/film/HL2%20on%203588%20uConsol... (controls are impossible at first on the 3588 uConsole; this was me playing without the usual bindings just to prove the performance, so don't judge the gameplay)

The 3588 (10W) plays HL2 at 300 FPS and streams it at 60 FPS to Twitch.

Turns out 2025 was the year of the ARM Linux desktop after all!

TWM + emacs + irssi + mpv(ytdl)


Yes, DDR3 has the lowest CAS latency and lasts a LOT longer.

Just like SSDs from 2010 have 100,000 writes per bit instead of below 10,000.

CPUs might even follow the same durability pattern but that remains to be seen.

Keep your old machines alive and backed up!


> Yes, DDR3 has the lowest CAS latency and lasts a LOT longer.

DDR5 is more reliable. Where are you getting this info that DDR3 lasts longer?

DDR5 runs at lower voltages, uses modern processes, and has on-die ECC.

This is already showing up in reduced failure rates for DDR5 fleets: https://ieeexplore.ieee.org/document/11068349

The other comment already covered why comparing CAS latency is misleading. CAS latency is measured in clock cycles. Multiply by the length of a clock cycle to get the CAS delay.
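
As a quick sanity check in Python (a back-of-the-envelope using common retail configurations; exact timings vary by kit):

    # CAS delay (ns) = CL (cycles) / I/O clock; DDR's I/O clock is half the transfer rate.
    def cas_delay_ns(cl_cycles, transfer_rate_mts):
        io_clock_mhz = transfer_rate_mts / 2
        return cl_cycles / io_clock_mhz * 1000

    print(cas_delay_ns(9, 1600))    # DDR3-1600 CL9  -> 11.25 ns
    print(cas_delay_ns(30, 6000))   # DDR5-6000 CL30 -> 10.0 ns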


It has on-die ECC _because_ it is so unreliable. ECC is there to fix its terribleness from the factory.

So? If the net result is more reliable memory, it doesn't matter.

Many things in electrical engineering use ECC on top of less reliable processes to produce a net result that is more reliable on the whole. Everything from hard drives to wireless communication. It's normal.
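
As a toy illustration of the principle, here is a minimal Hamming(7,4) sketch in Python (not what DRAM on-die ECC actually uses; real codes are much wider, but the idea is the same: redundancy turns an unreliable medium into a reliable one):

    def encode(d):                       # d = [d1, d2, d3, d4], each 0 or 1
        p1 = d[0] ^ d[1] ^ d[3]
        p2 = d[0] ^ d[2] ^ d[3]
        p3 = d[1] ^ d[2] ^ d[3]
        return [p1, p2, d[0], p3, d[1], d[2], d[3]]   # positions 1..7

    def correct(c):
        syndrome = 0
        for pos, bit in enumerate(c, start=1):
            if bit:
                syndrome ^= pos          # XOR of set positions; 0 means clean
        if syndrome:
            c[syndrome - 1] ^= 1         # nonzero syndrome points at the flipped bit
        return c

    word = encode([1, 0, 1, 1])
    word[4] ^= 1                         # simulate a single bit flip
    assert correct(word) == encode([1, 0, 1, 1])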


ECC doesn't completely fix it, it only masks problems for the most common use patterns. Rowhammer is a huge problem.

Just like increasing the structure size "only" decreases the likelihood of bit flips. Correcting physical unreliability with more logic may feel flimsy, but in the end, probabilities are probabilities.

CAS latency is specified in cycles and clock rates are increasing, so despite the number getting bigger there's actually been a small improvement in latency with each generation.

Not for small amounts of data.

Bandwidth increases, but if you only need a few bytes, DDR3 is faster.

Also, lower speed means less heat and a longer life.

You can feel the speed advantage by just moving the mouse on a DDR3 PC...


While it has nothing to do with how responsive your mouse feels, as that is measured in milliseconds while CAS latency is measured in nanoseconds, there has indeed been a small regression with DDR5 memory compared to the 3 previous generations. The best DDR2-4 configurations could fetch 1 word in about 6-7 ns while the best DDR5 configurations take about 9-10 ns.

https://en.wikipedia.org/wiki/CAS_latency#Memory_timing_exam...


RAM latency doesn't affect mouse response in any perceptible way. The fastest gaming mice I know of run at 8000Hz, so that's 125,000 ns between samples, much bigger than any CAS latency. And most mice run substantially slower.

Maybe your old PC used lower-latency GUI software, e.g. uncomposited Xorg instead of Wayland.


I only felt it on Windows; maybe that is due to the special USB mouse drivers Microsoft made? Still, motion-to-photon latency really is lower on my DDR3 PCs, would be cool to know why.

You are conflating two things that have nothing to do with each other. Computers have had mice since the 80s.

> Still, motion-to-photon latency really is lower on my DDR3 PCs, would be cool to know why.

No, it isn't; your computer is doing tons of stuff, and the cursor on Windows is a hardware feature of the graphics card.

Should I even ask why you think memory bandwidth is the cause of mouse latency?


Dan Luu actually measured the latency of older computers (terminal, input latency) and compared it to modern computers. It shows that older computers (and I mean previous-century old) have lower input latency. This is much more interesting than 'feelings', especially when discussing with other people.

> 100,000 writes per bit

per cell*

Also, that SSD example is wildly untrue, especially in the context of available capacity at the time. You CAN get modern SSDs with mind-boggling write endurance per cell, AND they have multitudes more cells, resulting in vastly more durable media than what was available pre-2015.

The one caveat to modern stuff being better than older stuff is Optane (the enterprise stuff like the 905P or P5800X, not that memory-and-SSD-combo shitshow that Intel was shoveling out the consumer door). We still haven't reached parity with the 3DXPoint stuff, and it's a damn shame Intel hurt itself in its confusion and cancelled it, because boy would they and Micron be printing money hand over fist right now if they were still making them.

Still, point being: not everything is a TLC/QLC 0.3DWPD disposable drive like has become standard in the consumer space. If you want write endurance, capacity, and/or performance, you have more and better options today than ever before (Optane/3DXPoint excepted).

Regarding CPUs, they still follow that durability pattern if you unfuck what Intel and AMD are doing with boosting behavior and limit them to perform with the margins that they used to have "back in the day". This is more of a problem on the consumer side (Core/Ryzen) than the enterprise side (Epyc/Xeon). It's also part of why the OC market is dying (save for maybe the XOC market that is having fun with LN2): those CPUs (especially consumer ones) come from the factory with much less margin for pushing things, because they're already close to their limit without exceedingly robust cooling.

I have no idea what the relative durability of RAM is, tbh; it's been pretty bulletproof in my experience over the years, or at least bulletproof enough for my use cases that I haven't really noticed a difference. The notable exception is what I see in GPUs, but that is largely heat-death related and often a result of poor QA by the AIB that made it (e.g., thermal pads not making contact with the GDDR modules).


Maybe, but in my experience a good old <100GB SSD from 2010-14 will completely demolish any >100GB drive from 2014+ in longevity.

Some say they have the opposite experience; mine are ONLY Intel drives, maybe that is why.

The X25-E is the diamond peak of SSDs, probably forever, since the machines to make 45nm SLC are gone.


What if you overprovision the newer SSD to a point where it can run the entirety of the drive in pseudo-SLC ("caching") mode? (You'd need to store no more than 25% of the nominal capacity, since QLC has four bits per cell.) That should have fairly good endurance, though still a lot less than Optane/XPoint persistent memory.
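
Rough math on that, as a sketch (whether the controller will actually keep the whole drive in pseudo-SLC mode is firmware-dependent):

    nominal_qlc_tb = 4.0     # advertised QLC capacity
    bits_per_cell = 4        # QLC stores 4 bits per cell, pSLC stores 1
    pslc_tb = nominal_qlc_tb / bits_per_cell
    print(pslc_tb)           # 1.0 TB usable if you stay inside the pSLC region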

Which tells me your experience is incredibly limited.

Intel was very good, and when they partnered with Micron, they made objectively the best SSDs ever made (the 3DXPoint Optanes). I lament that they sold their storage business unit, though of all the potential buyers, SK was probably the best-case scenario (they have since rebranded it into Solidigm).

The Intel X25-E was a great drive, but it is not great by modern standards, and in any write-focused workload it is an objectively, provably bad drive by any standard these days. Let's compare it to a Samsung 9100 Pro 8TB, which is a premium consumer drive and a quasi mid-level enterprise drive (depends on use case; it's lacking a lot of important enterprise features such as PLP), still a far cry from the cream of the crop, but with an MSRP comparable to the X25-E's at launch.

X25-E 64GB vs 9100 Pro 8TB:

MSRP: ~$900 ($14/GB) vs ~$900 ($0.11/GB)

Random Read (IOPS): 35.0k vs 2,200k

Random Write (IOPS): 3.3k vs 2,600k

Sustained/Seq Read (MBps): 250 vs 14,800

Sustained/Seq Write (MBps): 170 vs 13,400

Endurance: >=2 PB writes vs >=4.8 PB writes

In other words, it loses very badly in every metric, including performance and endurance per dollar (in fact, it loses so badly on performance that it still isn't close even if we assume the X25-E is only $50), and we're not even into the high end of what's possible with SSDs/NAND flash today. Hell, the X25-E can't even compare to a Crucial MX500 SATA SSD except on endurance, which it only barely beats (2 PB for the X25-E vs 1.4 PB for the 4TB). The X25-E's incredibly limited capacity (64GB max) also makes it a non-starter for many people no matter how good the performance might be (but isn't).

Yes, per cell the X25-E is far more durable than an MX500 or 9100 Pro, yielding a Drive Writes Per Day endurance of about 17 DWPD, which is very good. An Intel P4800X, however (almost a 10-year-old drive itself), had 60 DWPD, or more than 3x the endurance when normalized for capacity, while also blowing it (and nearly every other SSD ever made until very recently) out of the water on the performance front as well. And let's not forget: not only can you supplement per-cell endurance with more cells (aka more capacity), but the X25-E's maximum capacity of 64GB makes it a non-starter for the vast majority of use cases right out of the gate, even if you try to stack them in an array.
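
For reference, the DWPD figure falls out of a simple formula; a quick check in Python (assuming the usual 5-year warranty window):

    # DWPD = rated write endurance / (capacity * warranty days)
    endurance_tb = 2000              # ~2 PB rated writes (X25-E, per above)
    capacity_tb = 0.064              # 64 GB
    warranty_days = 5 * 365
    print(round(endurance_tb / (capacity_tb * warranty_days), 1))  # ~17.1 DWPD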

For truly high end drives, look at what the Intel P5800X, Micron 9650 MAX, or Solidigm D7-5810 are capable of for example.

Oh, and btw, a lot of those high-end drives have SLC as their transition flash layer, sometimes in capacities greater than the X25-E was ever available in. So the assertion that they don't make SLC isn't true either; we just got better at designing these devices so that we aren't paying over $10/GB anymore.

So no. By today's standards the X25-E is not "the diamond peak". It's the bottom of the barrel and in most cases non-viable.


My experience is 10 drives from 2009-2012 that still work and 10 drives from 2014 that have failed.

Yes, we've already established your experience is incredibly limited and not indicative of the state of the market. Stop buying bad drives and blaming the industry for your uninformed purchasing decisions.

Hell, as you admitted that your experience is limited to Intel, I'd wager at least one of those drives that failed was probably a 660P, no? Intel was not immune from making trash either, even if they did also make some good stuff (which, for their top-tier stuff, was technically mostly Micron's doing).

I've deployed countless thousands of solid-state drives (hell, well over a thousand all-flash arrays) that in aggregate probably exceed an exabyte of raw capacity by now. This is my job. I've deployed individual systems with more SSDs than you've owned in total, from the sound of it. And part of why it's hard to kill those old drives is that they are literal orders of magnitude slower, meaning it takes literal orders of magnitude more time to write the same amount of data. That doesn't make them good drives; it makes them near-worthless even when they work, especially considering the capacity limitations that come with it.

I'm not claiming bad drives don't exist; they most certainly do, and I would consider over 50% of what's available in the consumer market to fit that bill. But I also have vastly higher standards than most, because if I fuck something up, the cost to fix it is often astronomical. Modern SSDs aren't inherently bad; they can be, but not necessarily so. Just like they aren't inherently phenomenal; they can be, but not necessarily so. But they do exist, at a variety of price points and use cases.

TL;DR Making uninformed purchasing decisions often leads to bad outcomes.


CAS latency doesn't matter so much as the total random-access latency in ns and the raw clock speed of the individual RAM cells. If you are accessing the same cell repeatedly, RAM hasn't gotten faster in years (since around DDR2, IIRC).

Old machines use a lot more power (older process nodes), and DDR5 has an equivalent of ECC built in, while previously you had to specifically get ECC RAM, and it wouldn't work on cheaper Intel hardware (the bulk of old hardware is going to be Intel).

The on-chip ECC in DDR5 is there to account for lower reliability of the chips themselves at the higher speeds. It does NOT replace dedicated ECC chips which cover a whole lot more.

Seems this generates some sort of shim that calls the Source 2 dynamic lib.


Ok, so all bits have to be rotated, even when powered on, to not lose their state?

Edit: found this below: "Powering the SSD on isn't enough. You need to read every bit occasionally in order to recharge the cell."

Hm, so does the firmware have "read bits to refresh them" logic?


Kind of. It's "read and write back" logic, and also "relocate from a flaky block to a less flaky block" logic, and a whole bunch of other things.

NAND flash is freakishly unreliable, and it's up to the controller to keep this fact concealed from the rest of the system.
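
A hand-wavy Python sketch of what a scrub pass might look like conceptually (all names and thresholds made up; real FTL firmware is vastly more involved):

    import random

    class Block:
        """Toy stand-in for a NAND block; everything here is simulated."""
        def __init__(self, data):
            self.data, self.retired = data, False
        def read_with_ecc(self):
            # Pretend ECC corrected some number of weak bits on this read.
            return self.data, random.randint(0, 10)
        def rewrite(self, data):
            self.data = data             # writing back recharges the cells

    def scrub_pass(blocks, error_threshold=8):
        for block in blocks:
            data, corrected = block.read_with_ecc()
            if corrected == 0:
                continue                 # still healthy, leave it alone
            if corrected < error_threshold:
                block.rewrite(data)      # "read and write back"
            else:
                block.retired = True     # too flaky: retire (relocation omitted)

    scrub_pass([Block(b"page0"), Block(b"page1")])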


I concur; in my experience ALL my 24/7 drives from 2009-2013 still work today and ALL my 2014+ are dead; they started dying after 5 years, and the last one died 9 years in. Around 10 drives in each group. All the older drives are below 100GB (SLC); all the newer ones are above 200GB (MLC). I reverted back to older drives for all my machines in 2021 after scoring 30x unused X25-Es on eBay.

The only MLC I use today is Samsung's best industrial drives, and they work, sort of... but no promises. And SanDisk SD cards, which, if you buy the cheapest ones, last a surprising amount of time. A 32GB one lasted 11-12 years for me. Now I mostly install 500GB-1TB ones (recently = only been running for 2-3 years) after installing some 200-400GB ones that still work after 7 years.


> in my experience ALL my 24/7 drives from 2009-2013 still work today and ALL my 2014+ are dead,

As a counter anecdote, I have a lot of SSDs from the late 2010s that are still going strong, but I lost some early SSD drives to mysterious and unexpected failures (not near the wear-out level).


Interesting, what kind were they? Mine were all Intel.


Yes, but that tradeoff comes with a hidden cost: complexity!

I'd much rather have 64GB of SLC at 100K WpB than 4TB of MLC at less than 10K WpB.

The spread functions that move bits around to even out the writes, or the caches, will also fail.

The best compromise is of course to use both kinds for different purposes: SLC for a small main OS (that will inevitably have logs and other writes) and MLC for slowly changing large data like a user database or files.
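
Something like this split, sketched as an fstab (device names and filesystem choices made up):

    # 64GB SLC drive: OS root, where logs and other small writes land
    /dev/sda1  /      ext4  defaults,noatime  0  1
    # 4TB MLC drive: large, slowly changing data
    /dev/sdb1  /data  ext4  defaults,noatime  0  2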

The problem is that now you cannot choose, because the factories/machines that make SLC are all gone.


> The problem is that now you cannot choose, because the factories/machines that make SLC are all gone.

You can still get pure SLC flash in smaller sizes, or use TLC/QLC in SLC mode.

> I'd much rather have 64GB of SLC at 100K WpB than 4TB of MLC at less than 10K WpB.

It's more like 1TB of SLC vs. 3TB of TLC or 4TB of QLC. All three take the same die area, but the SLC will last a few orders of magnitude longer.


SLC is still produced, but the issue is that there are no SLC products (that I'm aware of) for the consumer market.


My problem is: I have more than 64GB of data.


The key takeaway is that you will rebuild the drivers less often:

1) The stack is mature now, we know what features can exist.

2) For me it's about having the same stack as on a 3588 SBC, so I don't need to download many GB of Android software just to build/run the game.

The distance to getting an open-source driver stack will probably be shorter because of these 2 things, meaning OpenVR/SteamVR being closed is less of a long-term issue.


I'm confused. Why would you develop a game on an SBC (that's not powerful enough to do VR)? Why are you not just cross-compiling?

It's possible that you can have a full open-source stack some day on these goggles... but I don't think that's something that's obviously going to happen. SteamVR sounds like their version of Google Play Services.


The 3588 can do VR, just not Unity/Unreal VR. That is a problem with bloated engines, not the 3588.

All mainstream headsets get open-source drivers eventually: https://github.com/collabora/libsurvive


Yeah, but is foveated streaming and whatnot going to be open-source, or are we going to have to wait a decade for some grad student to reimplement a half-broken version?


Probably, but eye traction is never going to be the focus of indie engines, especially if they run on the 3588.

Also, about cross-compiling: that is meaningless, as you need hardware to test on, and then you should be able to compile on the device you are using to test. At least that is what I want: make devices that cannot compile illegal.


*tracking


Yep, I'm back into VR with this move, especially if the price is closer to $500 than $1000.

Unless the lenses/displays are bad, but I figure we would have heard by now?


It's about the database; you need to make your own database!

As long as your bottom dependency is fixed, you cannot progress!

http://root.rupy.se


Or you just use SMTP and read the 250 response on the SEND?


In extension of that spirit, some spam could be eliminated if more people turned address verification on in their SMTP servers, which makes the delivery peers symmetric.


Do you mean source or destination address verification or both?

Source address verification doesn't really mean anything (no-reply@example.co.uk), and destination verification is obvious; as far as I am aware, pretty much everyone does it already.

"delivery peers symmetric" - what does that mean?


With source address verification (and server validation), it is guaranteed that the mail comes from the server that controls the sender's mail address and that this address does indeed exist. By symmetric I mean that both servers then resolve each other the same way: both check whether their side of the mailbox exists, and they share the time during which this happens, so you can't use it for DoS, since it takes your time as well.


You send me mail with noreply@example. I go to your MX to see if noreply@example will receive mail. If not, you are spamming.
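
A rough Python sketch of that check, an "SMTP callout" (assumes dnspython is installed; note that many servers deliberately accept every RCPT, greylist, or rate-limit exactly this kind of probe):

    import smtplib
    import dns.resolver  # dnspython

    def sender_accepts_mail(address):
        domain = address.rsplit("@", 1)[1]
        # Pick the highest-priority MX for the sender's domain.
        mx = min(dns.resolver.resolve(domain, "MX"),
                 key=lambda r: r.preference).exchange.to_text()
        with smtplib.SMTP(mx, 25, timeout=10) as smtp:
            smtp.helo("verifier.example")
            smtp.mail("")                  # null sender, like a bounce probe
            code, _ = smtp.rcpt(address)
            return code == 250             # 250 = recipient accepted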


