If you run the `validate.py` script available in the repo, you should see correlation numbers similar to what I've pre-tested & made available in the README: fssimu2 achieves 99.97% linear correlation with the reference implementation's scores.
fssimu2 is still missing some functionality (like ICC profile reading), but the goal was to produce a production-oriented implementation that is just as useful while being much faster (for example, the lower memory footprint and speed improvements make fssimu2 a lot more useful in a target-quality loop). For research-oriented use cases where the exact SSIMULACRA2 score is desirable, the reference implementation is a better choice. It is worth evaluating whether that is your use case; an implementation that is 99.97% correlated is likely just as useful to you if you are doing quality benchmarks, target quality, or anything else where SSIMULACRA2's correlation to subjective human ratings matters more than exact agreement with the reference.
Thank you for clarifying this, it was a misread on my side.
The overall percentage deviation from the reference implementation is marginal,
but the mere existence of `validate.py` made it look to me like it must match exactly.
You can build the image yourself, but you have to switch off some packages or features — otherwise the image (Linux kernel + tools) is just too large or consumes too much memory. The original router has 8 MB of RAM and 2 MB of flash ("storage"). You can boot a recent 6.16.5 kernel, but with 8 MB there is not much left to work with 8-)
It really needs more benchmarks, especially for decompression time.
Also, the sizes are interesting for very small images, but for
real images there may be better lossy alternatives:
It's a lossless format optimized for file size rather than decompression speed; the README seems clear enough. Made by a pixel art game dev, for compressing sprites in pixel art games, so I assume it fits a useful niche.
I don't see any hassle, really. It's just another image format: good for some use cases, bad for others. No one file format is perfect. It was interesting enough for me to give it a couple of hours to implement a CLI and add support to my pixel app.
I'm also a bit shocked by this SDK approach. Why not a simple API
where you upload a file, get an ID, and wait till it's done?
Besides that, sometimes it works, sometimes not:
For 1500 passengers and 30 MWh (I expect they don't arrive 100% empty) this is 20 kWh per person. If they travel 80 kilometers, that's 0.25 kWh/person/kilometer. Sounds OK to me (all values are guessed).
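The back-of-envelope arithmetic above can be checked in a few lines (all inputs are the guessed values from the comment, not real data):

```python
# Guessed inputs from the comment above.
battery_mwh = 30      # assumed usable battery capacity, MWh
passengers = 1500
distance_km = 80

# Convert MWh to kWh and split across passengers.
kwh_per_person = battery_mwh * 1000 / passengers      # 20.0 kWh/person

# Normalize by distance traveled.
kwh_per_person_km = kwh_per_person / distance_km      # 0.25 kWh/person/km

print(kwh_per_person, kwh_per_person_km)
```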
You may not have seen it, but there are vendors selling such stuff and have been for ~20 years.
Google for linux + hardware + cpu hotplug or memory hotplug.
The PCI bus helps here.
I have seen it. Yes, they technically exist. Nobody buys them though.
They are ridiculously expensive. Their use cases in modern compute are a rounding error towards zero. We just don't build computers like that anymore, for good reason. Memory and CPUs rarely fail, and when they do, the entire box fails and you just replace it. In 99.99% of all cases it's cheaper and easier to do it that way.
There is a vanishingly small set of use cases where CPU/memory hotplug makes sense. Vendors charge accordingly.
Like I said in my parent comment, virtually nobody needs uptimes measured in literal decades. If you are in the .01%(rounded up) of compute that actually needs that, the chances of needing to do it with x86 is even smaller.
One example is the VISA and Mastercard payment processing platforms. The way they are designed requires 24/7 operation with uptime measured in literal decades. When they have partial outages, they make international headlines and end up writing letters like this: https://www.parliament.uk/globalassets/documents/commons-com...