
A ratio limit is a heuristic.

There is an upper-limit to how much information you can compress into a given space. (Note that we may want to write a pathological program that is very small and allocates a lot of information-free memory. But that's not decompression.)

If we accept the premise then we can look at another approach to solving this problem, once and for all! I like examining memory allocation because it's so general. But there may be another way. We can examine the input to estimate compression ratio.

The problem here is that image decompression apparently gives strangers the ability to provide an arbitrary N and say "Please loop N times and/or allocate N bits". A modern CPU/RAM is overwhelmed by an N around 30 bits (and for many workloads even a 20-bit N is too much). This is a root cause of many problems!

You know, I'm going to go out on a limb here and make a bold assertion: I assert there is a very safe upper bound on the decompression ratio, and that for any real algorithm you can indeed examine the input to determine whether N exceeds your allowable threshold. 10x might be a bit low (although I doubt it), so let's be generous and say 100x. (Which seems crazy. Nothing that I know of, not even text, compresses that well.)

This means that I believe any image format, for example, has a trivially calculable N (for example, width*height in pixels). I would argue that in the general case (unless you are doing some sort of compsci research) the image file size should be related to N. That is, if the image width and height each fit in 10 bits, we should expect a roughly 20-bit file size.
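To make the idea concrete, here is a minimal sketch of that ratio guard. The 100x threshold comes from the argument above; the function name, the RGBA assumption, and the idea of reading width/height from the header before decompressing are my illustrative assumptions, not any particular library's API.

```python
# Ratio-guard sketch: before decompressing, compare the output size implied
# by the declared dimensions against the actual file size on disk.

MAX_RATIO = 100        # generous upper bound on plausible compression ratio
BYTES_PER_PIXEL = 4    # assumption: RGBA output, 4 bytes per pixel

def is_safe_to_decompress(width: int, height: int, file_size: int) -> bool:
    """Return True if the implied decompressed size stays within
    MAX_RATIO times the compressed file size."""
    decompressed = width * height * BYTES_PER_PIXEL  # this is the "N"
    return decompressed <= MAX_RATIO * file_size
```

A 1000x1000 RGBA image decompresses to about 4 MB; declared inside a 100 KB file that is a 40x ratio and passes, but the same dimensions declared inside a 100-byte file imply a 40000x ratio and get rejected before any memory is allocated.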


