
Do any of these newer/experimental schemes, such as this one, take into account other factors such as CPU load before declaring themselves "better"? For example, this project seems pretty cool, but there's no data on how CPU-bound, memory-bound, or I/O-bound its decompression algorithm is.

I guess what I'm asking is: if I hit a web page with 20 images @ 100k per image, is it going to nail one or more cores at 100% and drain the battery on my portable device? Fantastic compression is great, but what are the trade-offs?



New codecs almost always use more CPU at first because they have to do pure-software decoding. However, if the bandwidth savings are good enough, and usage is ubiquitous enough, eventually the new format will be implemented in hardware decoding chips, which will bring power usage back down.

This is most noticeable in video formats; older devices only have MPEG-1/MPEG-2/MJPEG encoders/decoders (imagine a $20 DVD player or an old digital camera), whereas newer devices can do H.264 and/or VP9 encoding/decoding (new iPhone, new Smart HDTV).


From the page,

> Encoding and decoding speeds are acceptable, but should be improved

From elsewhere:

https://news.ycombinator.com/item?id=10318161

though the above is quite old.


Not trying to be obtuse, but that's just a subjective measure of how quick their algo is or isn't. It doesn't address (nor does the linked HN discussion) how efficient it is in terms of burning up CPU and battery.


The link has more concrete (non-subjective) timings.


If it takes a long time, it probably means there is a lot to calculate. If there's a lot to calculate, it probably means the CPU is running at full speed to get through it.
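That said, wall-clock time alone doesn't tell you how hard the CPU worked. A rough sketch of how you could separate the two (using zlib here purely as a stand-in codec, since this format's decoder isn't available to me):

```python
import os
import time
import zlib

# Build ~2 MB of test data and compress it (zlib as a stand-in codec).
data = zlib.compress(os.urandom(1_000_000) + bytes(1_000_000))

# perf_counter measures wall-clock time; process_time measures CPU seconds.
wall0, cpu0 = time.perf_counter(), time.process_time()
out = zlib.decompress(data)
wall = time.perf_counter() - wall0
cpu = time.process_time() - cpu0

# A CPU-bound decoder shows cpu ~= wall; an I/O-bound one shows cpu << wall.
print(f"wall={wall:.4f}s cpu={cpu:.4f}s")
```

Running the same measurement against a real decoder binary would give at least a first-order answer to the "does it nail a core" question.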


Oh sure, I understand that, but many compression articles focus only on compression ratios (as this one seems to do), with no mention of the trade-offs required to obtain those results, or comparisons against existing, well-established algorithms.

That was the point of my post.


Ah, my intention radar was off. I assumed you were making an observation but didn't jump to the probable conclusion.


It says very clearly a number of times that it's better in terms of compression ratio.


The reason you're downvoted is that compression can often add computation. An example would be:

- I have a palette of bytes, so colors are stored as an 8-bit index instead of a 32-bit value (8 bits each for r, g, b, a) - every color now adds a lookup to that memory address

- I turn every color sequence possible into a numerator + denominator pair < 256 where possible, plus a length and offset that define how to compute it - when you reach a sequence, you must calculate the number: find the offset (up to 256) and, until the length (ideally > 4 bytes) is reached, get the value of each digit

These types of calculations seem small, and likely are more often than not. But add enough of them up and all of a sudden the CPU must hit 100% for the duration of 30+ images.


I don't care about my score. I could have guessed why, though: people not reading things properly. I was taking issue with the observation that the site said it was better. I pointed out that it didn't say that, just that the compression ratio was better.


> People not reading things properly

I did read the article twice and thoroughly; all I saw mentioned was "Encoding and decoding speeds are acceptable, but should be improved". That doesn't address the points I raised in my original post, if you were to read it properly.


It says:

"Encoding and decoding speeds are acceptable, but should be improved"

But that doesn't address resource usage such as CPU or battery.


It's hard for a single author/developer to fully explore the solution space.

For example, it could be that Javascript acquires special primitives to decode these images, or that they become a common Javascript engine extension, with a polyfill fallback.

It could be that these extensions make effective use of CPU-specific instructions, or that the underlying hardware contains special silicon to decode these images, similar to how there are H.264 decoder chips.

It's therefore possible to do the power consumption analysis, but the results wouldn't indicate anything fundamental about the algorithm, only about its current implementation. People are willing to work on improved implementations if other factors, like smaller size, suggest it's worthwhile to investigate.


Appreciate the time spent answering this.



