I'm very impressed, I was waiting for an image codec that combines something like VGG loss + GANs! (Another thing that I'm waiting for is a neural JPEG decoder with a GAN, which would be backwards compatible with all the pictures already out there!) Now we need to get some massive standardisation process going to make this more practical and perfect it, just like it was done for JPEG in the old days! (And then do it for video and audio too!)
What happens if you compress a noisy image? Does the compression denoise the image?
On the standardization issue: the advantage of a method like the one we presented is that, as long as there exists a standard for model specification, every image can be encoded with an arbitrary computational graph that is linked from the container.
Imagine being able to have domain-specific models - say, a high-accuracy/precision model for medical images (super-close to lossless), and one for low-bandwidth applications where detail generation is paramount. Also imagine having a program written today (assuming the standard is out) that can decode images created with a model invented 10 years from now, doing things that were not even thought possible when the program was originally written. This should be possible because new models are defined in terms of the same low-level building blocks (like convolutions and other mathematical operations)!
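To make the idea concrete, here's a minimal toy sketch (not the actual format - the op names, container fields, and ops themselves are all hypothetical): the container ships a computational graph expressed only in standardized primitive ops, so a decoder written today can run graphs designed later, as long as they stick to those primitives.

```python
# Hypothetical sketch of a "model-as-graph" container. The primitive set
# here is made up for illustration; a real standard would define ops like
# convolution, nonlinearities, upsampling, etc.
PRIMITIVES = {
    "scale": lambda xs, k: [v * k for v in xs],
    "add":   lambda xs, k: [v + k for v in xs],
    "clip":  lambda xs, k: [min(max(v, 0), k) for v in xs],
}

def decode(container):
    """Run the graph listed in the container over its latent payload."""
    data = container["latents"]
    for op_name, param in container["graph"]:
        data = PRIMITIVES[op_name](data, param)  # only standardized ops allowed
    return data

# A "new model" is just a new graph over the old primitives -
# an old decoder can still execute it:
container = {
    "latents": [0.1, 0.5, 0.9],
    "graph": [("scale", 255), ("add", 0.5), ("clip", 255)],
}
print(decode(container))  # -> [26.0, 128.0, 230.0]
```

The point of the sketch is the separation of concerns: the standard freezes the primitive ops and the container layout, while the graph itself - the "model" - stays free to evolve.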
On noise: I'll let my coauthors find some links to noisy images to see what happens when you process those.