Hacker News | gforce_de's comments

It does not match the reference implementation on my side:

  #!/bin/sh
  # originally from https://jpegxl.info/images/precision-machinery-shapes-golden-substance-with-robotic-exactitude.jpg
  # URL1="http://intercity-vpn.de/files/2025-10-04/upload/precision-machinery-shapes-golden-substance-with-robotic-exactitude.png"
  # URL2="http://intercity-vpn.de/files/2025-10-04/upload/image-png-all-pngquant-q13.png"
  curl "$URL1" -so test.png
  curl "$URL2" -so distorted.png
  
  # https://github.com/cloudinary/ssimulacra2/tree/main
  ssimulacra2 test.png distorted.png
  5.90462597

  # https://github.com/gianni-rosato/fssimu2
  fssimu2 test.png distorted.png
  2.17616860


Hi, author here – the README covers this in the Performance section: https://github.com/gianni-rosato/fssimu2?tab=readme-ov-file#...

If you run the `validate.py` script available in the repo, you should see correlation numbers similar to what I've pre-tested & made available in the README: fssimu2 achieves 99.97% linear correlation with the reference implementation's scores.

fssimu2 is still missing some functionality (like ICC profile reading) but the goal was to produce a production-oriented implementation that is just as useful while being much faster (example: lower memory footprint and speed improvements make fssimu2 a lot more useful in a target quality loop). For research-oriented use cases where the exact SSIMULACRA2 score is desirable, the reference implementation is a better choice. It is worth evaluating whether or not this is your use case; an implementation that is 99.97% accurate is likely just as useful to you if you are doing quality benchmarks, target quality, or something else where SSIMULACRA2's correlation to subjective human ratings is more important than the exactness of the implementation to the reference.
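For anyone curious what that correlation check means in practice, here is a rough stdlib-only sketch of a linear (Pearson) correlation between two score lists. The score pairs are made up for illustration; `validate.py` in the repo does the real comparison:

```python
import math

def pearson(xs, ys):
    """Linear (Pearson) correlation between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# made-up (reference, fssimu2) score pairs, for illustration only
ref  = [5.90, 41.2, 67.8, 80.1, 92.3]
fast = [2.18, 39.9, 66.5, 79.4, 91.8]
print(f"{pearson(ref, fast):.4f}")
```

The point being: two implementations can disagree on individual scores (5.90 vs 2.18 above) while still being almost perfectly linearly correlated across a test set.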


Thank you for clarifying this; it was a misread on my side. The overall percentage deviation from the reference implementation is marginal, but the pure existence of 'validate.py' made it look to me like the scores must match exactly.


Quick follow-up from the original SSIMULACRA2 author:

> The error will be much smaller than the error between ssimu2 and actual subjective quality, so I wouldn't worry about it.


You can build the image yourself, but you have to switch off some packages or features - otherwise the image (Linux kernel + tools) is just too large or consumes too much memory. The original router has 8 MB of RAM and 2 MB of flash ("storage"). You can boot a recent 6.16.5 kernel, but with 8 MB there is not much left to work with 8-)

A starter is here: https://intercity-vpn.de/files/openwrt/wrt54gtest/minimal/
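The trimming happens in the build configuration. A hedged sketch of the kind of `.config` overrides involved - the exact symbol names vary by OpenWrt release and target, so check `make menuconfig` for the real ones:

```
# target: Broadcom BCM47xx (WRT54G family) -- name differs between releases
CONFIG_TARGET_bcm47xx=y
# drop features a 2 MB flash image cannot afford
# CONFIG_PACKAGE_ppp is not set
# CONFIG_PACKAGE_opkg is not set
# CONFIG_IPV6 is not set
```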


I remember swapping the TSOP packages on a WRT54 to double the RAM.

Here's a blog post about this, not sure if it was the same one I followed:

https://blog.thelifeofkenneth.com/2010/09/upgrading-ram-in-w...


Why is there no support for compression?

  $ URL=https://herman.bearblog.dev/
  $ curl -v -H 'Accept-Encoding: deflate, gzip, br, zstd' $URL 2>&1 | grep --text ^'< Content-Encoding:\|< Content-Length:\|> Accept-Encoding:'
> Accept-Encoding: deflate, gzip, br, zstd ...


Interesting idea, but usually a JSON payload is compressed with brotli anyway.

It seems the computational overhead is not worth it?
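As a rough illustration of why a dedicated JSON compressor may not pay off: even generic gzip (brotli typically does better still) already collapses JSON's repetitive structure. A stdlib-only sketch with a made-up payload:

```python
import gzip
import json

# a repetitive JSON payload, as typical API responses are
payload = json.dumps(
    [{"name": f"model-{i}", "status": "ANALYZING", "score": None} for i in range(50)]
).encode()

compressed = gzip.compress(payload, compresslevel=9)
print(len(payload), len(compressed))  # compressed is a small fraction of the original
```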


It really needs more benchmarks, especially for decompression time. The sizes are also interesting for very small images, but for real images there may be better lossy options:

  nz_scene - PEP = 73,542 bytes,
       lossy-PNG = 43,557 bytes,
      lossy-WEBP = 26,654 bytes,
  lossy-mozcjpeg = 15,716 bytes
So it's not about filesize here, it must be decompression speed.


The creator says the PEP image format is meant for small, limited-colour images, and of course it does lossless compression.


Thanks for making that clear. But is it worth the hassle?

https://nigeltao.github.io/blog/2021/fastest-safest-png-deco...

PNG decoding seems to be fast enough:

  tree1    - PEP =  0.412 ms PNG = 0.25 ms
  font     - PEP =  0.602 ms PNG = 0.663 ms
  nz_scene - PEP = 32.121 ms PNG = 3.069 ms
Anyway, PEP is interesting!
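For whoever wants to reproduce such numbers, the measurement pattern is simple; here is a sketch using `zlib` decompression as a stand-in for an actual image decoder (the data and codec are placeholders, only the timing approach matters):

```python
import time
import zlib

# stand-in for an encoded image: highly repetitive, compresses well
data = zlib.compress(b"pixel-data " * 100_000)

t0 = time.perf_counter()
decoded = zlib.decompress(data)
elapsed_ms = (time.perf_counter() - t0) * 1000
print(f"{elapsed_ms:.3f} ms to decode {len(decoded)} bytes")
```

For stable numbers you would of course run the decode in a loop and take the minimum, as `timeit` does.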


It's a lossless format optimized for file size rather than decompression speed; the README seems clear enough. Made by a pixel art game dev, for compressing sprites in pixel art games, so I assume it fits a useful niche.


I don't see any hassle, really. It's just another image format: good for some use cases, bad for others. No one file format is perfect. It was interesting enough for me to give it a couple of hours to implement a cli and add support to my pixel app.


I'm also a bit shocked by this SDK approach - why not a simple API where you upload a file, get an ID, and poll until it's done? Besides that, sometimes it works, sometimes not:

  {
      "request_id": "9622a21f-37bf-4404-ac84-8728977a5272",
      "status": "ANALYZING",
      "score": null,
      "models": [
          {
              "name": "rd-context-img",
              "status": "ANALYZING",
              "score": null
          },
          {
              "name": "rd-pine-img",
              "status": "ANALYZING",
              "score": null
          },
          {
              "name": "rd-oak-img",
              "status": "ANALYZING",
              "score": null
          },
          {
              "name": "rd-elm-img",
              "status": "ANALYZING",
              "score": null
          },
          {
              "name": "rd-img-ensemble",
              "status": "ANALYZING",
              "score": null
          },
          {
              "name": "rd-cedar-img",
              "status": "ANALYZING",
              "score": null
          }
      ]
  }
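The simple upload-then-poll API I have in mind would be used roughly like this. The endpoint, field names, and polling parameters are hypothetical; the fetch function is injected so the loop itself stands alone:

```python
import time

def poll_until_done(fetch, request_id, interval=2.0, max_tries=30):
    """Poll a hypothetical status endpoint until the analysis finishes."""
    for _ in range(max_tries):
        # fetch() stands in for e.g. GET /v1/requests/<id> returning parsed JSON
        result = fetch(request_id)
        if result["status"] != "ANALYZING":
            return result
        time.sleep(interval)
    raise TimeoutError(f"request {request_id} still analyzing")

# fake fetcher standing in for the real HTTP call
responses = iter([{"status": "ANALYZING", "score": None},
                  {"status": "DONE", "score": 0.97}])
print(poll_until_done(lambda rid: next(responses), "9622a21f", interval=0))
```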


The minified version needs ~51 kilobytes (16 compressed):

  $ curl --location --silent "https://unpkg.com/htmx.org@2.0.4" | wc -c
  50917
  
  $ curl --location --silent "https://unpkg.com/htmx.org@2.0.4" | gzip --best --stdout | wc -c
  16314


See fixi if you want a bare-bones version of the same idea:

https://github.com/bigskysoftware/fixi


For 1500 passengers and 30 MWh (I expect they don't arrive 100% empty), this is 20 kWh per person. If they travel 80 kilometers, that's 0.25 kWh/person/km. Sounds OK to me (all values are guessed).


It is interesting to see how that compares to the much smaller Candela P-12 ferry shuttle. The Candela is an electric hydrofoil.

30 passengers and 336 kWh usable at max range gives ~11 kWh per person. With a max range of 72 km (40 nautical miles), that is ~0.16 kWh/person/km.

https://candela.com/pro-series/p-12-shuttle/
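The two back-of-envelope numbers above, as arithmetic (all inputs guessed or quoted in the comments above):

```python
# large ferry: 30 MWh battery, ~1500 passengers, ~80 km trip (guessed values)
ferry = 30_000 / 1500 / 80   # kWh per person per km

# Candela P-12: 336 kWh usable, 30 passengers, 72 km max range
candela = 336 / 30 / 72

print(f"ferry: {ferry:.2f} kWh/person/km, P-12: {candela:.2f} kWh/person/km")
```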


Catamarans can be close to foiling. I suspect the cargo has a huge effect here too.


My first reaction too, but...


You may not have seen it, but vendors have been selling such hardware for ~20 years. Google for linux + hardware + CPU hotplug or memory hotplug. The PCI bus helps here.


I have seen it. Yes, they technically exist. Nobody buys them though.

They are ridiculously expensive. Their use-cases in modern compute are a rounding error toward zero. We just don't build computers like that anymore, for good reason. Memory and CPUs rarely fail, and when they do, you fail the entire box and just replace it. In 99.99% of cases that is cheaper and easier.

There are vanishingly small use-cases where it makes sense to do hotplug CPU/Memory. They charge accordingly.

Like I said in my parent comment, virtually nobody needs uptimes measured in literal decades. If you are in the .01% (rounded up) of compute that actually needs that, the chances of needing to do it with x86 are even smaller.

One example is the VISA and Mastercard payment processing platforms. The way they are designed requires literal decades of 24/7 uptime. When they have partial outages, they make international headlines and end up writing letters like this: https://www.parliament.uk/globalassets/documents/commons-com...

