I wish someone like the creator of the Curiosity descent footage ( http://www.youtube.com/watch?v=Esj5juUzhpU ) could get their hands on the three separate RGB channels and combine them manually. The frames did not seem to be overlaid properly, and combining 1R + 2B + 3G as 1F, 2B + 3G + 4R as 2F, etc. caused offsets in every frame.
Hi - the creator, Bard Canning here. That's actually a really great idea. I've emailed them about it. But I'm not going to hold my breath. They don't appear to know much about post-production enhancement methods, etc. The "advanced" post-processing shown by the company involved nothing more than rendering the frames as a movie. They appear to have done nothing at all to stabilize the individual RGB frames or to correct the histogram or color balance.
I wouldn't want to attempt this by decomposing the compressed video footage - it would just be too ugly and low-resolution. But here's hoping that they allow me access to the original frame scans.
I agree that doing some more advanced post-processing on this would be great, seems the quality could be a lot better. Perhaps look to the restoration of 'A trip to the Moon' for inspiration of what might be possible.
What are you the creator of, by the way? The video clip? Good luck with obtaining the frame scans.
EDIT: OK I see you are the creator of the curiosity clip. Amazing work! Hope you can pull this off, I would pitch in on Kickstarter.
That Curiosity descent video was amazing. Here's an even better side-by-side with "director's cut" captions explaining what's being done: http://youtu.be/pjeHZ9poew4
Thanks for sharing. But in answer to your question - the RGB channels are there - you can download the h264 vid and decompose it (sure, you'll lose some quality, but that's not the real blocker). The real issue is the time - if you believe him, the Mars Curiosity guy says it took him a month to pull that off: "It took 29 days from start to finish, working full-time on it for the last week. This was the most laborious media project that I've ever done. But I don't regret a minute of it." So you'd have to convince him to do it..
But you can probably PM him on Reddit if you're serious enough ;)
Great job! Really amazing what you could extract from that footage! There is only one thing that I think could be improved a bit (I am no expert, by the way, just a video aficionado): at 1'50", when the jet from the rockets hits the ground, you can clearly see it in the original footage but not in your version until 4 or 5 seconds later, so the impressive first burst is lost (I imagine that is a trade-off of the morphing).
Anyway, really impressive work! NASA should hire you!
So you 'only' applied 2D interpolation - if I understand what motionflow is correctly - and it restored that much information? I thought it was a 3D-aware superresolution algorithm like the ones I've seen used for space probes.
That's actually not a bad idea... the footage does appear to be quite long - and with having to stabilize the three RGB channel frames separately it would take three times as long.
I would probably have to put aside a significant chunk of time in order to get it done.
People have been so amazing and supportive of the Curiosity video - I might just give it a try!
It would certainly be a big project - but wouldn't it be amazing to make the first ever color movie crisp, clear and stable!
I certainly hope the project gets back to you - if they don't, you could kick it up a notch by writing a quick blog post or something describing who you are, what you want to do, and getting some social media publicity spun up. I can see the Reddit headline, "I'm the guy who restored the Curiosity descent video and I want to restore the world's first color film" - karma gold.
If you did start a Kickstarter campaign, I guess you wouldn't really have any swag like shirts, early copies, etc. like they usually offer. But it shouldn't be a problem, I really don't think you would have any trouble at all getting funding, considering your reputation and just how cool this would be.
I think much of the color bleeding is due to the colors being shot at different times. Prokudin-Gorskii images have color bleeding too, and most of the subjects photographed were static. Motion interpolation and color enhancement are a bit different from merging RGB channels from different moments in time back together. It's similar to deinterlacing fields of video to make a progressive frame. Results can vary, and artifacts will probably still exist where there is a lot of motion.
I guess one could attempt to assemble a luma (greyscale) version of the film from the different channels and then attempt to fill in the color using approximations. Sounds like a lot of work.
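To make the luma idea concrete, here's a minimal sketch (my own illustrative code, not anything from the actual restoration), assuming the three filtered exposures are available as grayscale arrays. I'm using the Rec. 601 luma weights, which are just one common convention; since the channels were shot at different instants, moving objects would still smear in the result.

```python
import numpy as np

def approximate_luma(red, green, blue):
    """Combine three single-channel exposures into one luma frame.

    Rec. 601 weights (0.299/0.587/0.114) are one common convention.
    The channels were captured at different instants, so anything
    that moved between exposures will still smear across the output.
    """
    return 0.299 * red + 0.587 * green + 0.114 * blue

# Toy 2x2 frames standing in for film scans (values in 0..1).
r = np.full((2, 2), 1.0)
g = np.full((2, 2), 0.5)
b = np.full((2, 2), 0.0)
luma = approximate_luma(r, g, b)
```

The hard part, of course, is the "fill in color using approximations" step, which this doesn't even touch.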
"To license the use of images or clips, please contact Science and Society Picture Library."
That's unfortunate. If they released the raw digital data to the public, I bet someone out there would take it on themselves to restore it much further.
One should also not discount the possibility of emulsion damage. There might just be some damage to the light-sensitive material or the original substrate.
Funny: while the Macaw and the director's children were in brilliant color, the panorama of London was black-and-white. Why? Because London was black-and-white. Black cars, black clothes, grey stone buildings. Not the best subject for a color film, but interesting historically.
Also: not only the first color film, but the first color-film director and actors.
Am I misunderstanding something, or is the animation in the video clip at around 3:00 wrong? There it looks as if a single frame is exposed to all three colors sequentially, while really the color disk should be rotating in a way that ensures a frame always represents the same color while it moves.
The animation is incorrect: The color wheel has the colors "moving" up and the film moving down. When a red frame enters from the top, it should be projected as red for all three of its appearances on the screen.
If you watch the explanatory video, you see that successive frames were subject to different filters. A spinning color filter (blue, green, red) ran in front of the film, and within each set of three frames, you had successive shots filtered for blue, green, and red. In any scene in which there was motion of either the subject or the camera, you'll then see a slight mis-registration of each of the three component colors. It's an artifact of the process.
Modern film works by combining all three colors on a single spool of film and bringing out the color through the development process. An alternate method would be to shoot with three separate lenses onto three separate frames, though you'd then get parallax color artifacts (as with the Prokudin-Gorskii process, posted to HN recently).
This film is very different from interlaced video. Interlaced video fields (the two parts of a single frame) are captured at different points in time. This film has frames captured at different points in time. The similarity ends here.
With interlaced video, the two fields make a single frame. With this film, every frame is displayed three times: First, with the two frames that precede it; next, with the one that precedes it and the one that follows; lastly, with the two that follow. Every frame (with the exception of the beginning and the end) would be displayed three times.
Now, we can certainly create our own "full color" frames by successively combining every three frames (1-2-3; 2-3-4; 3-4-5; 4-5-6; etc.), but then, I believe, we either need some intelligence in software to find parts of other frames that can be moved and morphed to align with the preceding frames, or we need a human to manually move and morph those bits for each of our new "frames."
Not precisely the same as converting interlaced to progressive scan.
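The naive sliding-window combination can be sketched in a few lines (an illustrative toy of my own, assuming the scans come in a repeating R, G, B filter order; no motion compensation, so this is exactly the fringing-prone merge described above):

```python
import numpy as np

def sliding_window_color(frames):
    """frames: list of 2-D arrays in repeating filter order R, G, B, ...

    Returns a list of H x W x 3 RGB frames, one per window position
    (1-2-3, 2-3-4, 3-4-5, ...). No alignment is attempted, so anything
    that moved between exposures will show color fringing.
    """
    out = []
    for i in range(len(frames) - 2):
        window = frames[i:i + 3]
        rgb = [None, None, None]
        for offset, scan in enumerate(window):
            # The channel each scan belongs to depends on its absolute
            # position in the sequence, not its position in the window.
            rgb[(i + offset) % 3] = scan  # 0=R, 1=G, 2=B
        out.append(np.dstack(rgb))
    return out

# Four flat toy scans standing in for film frames.
scans = [np.full((2, 2), v) for v in (0.1, 0.2, 0.3, 0.4)]
color_frames = sliding_window_color(scans)
```

The intelligence mentioned above (software or human) would go between the windowing and the stacking, warping each scan toward a common reference before they are merged.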
OK, the frame construction is different. But the video processing issue is the same: captures from different points in time need to be integrated to avoid the edges blurring (or feathering, with interlacing). Signal processing algorithms attempt feature and edge detection and realign them in the composite frame.
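The simplest version of that realignment is a brute-force shift search, the same trick often used to register Prokudin-Gorskii plates. A rough sketch of my own (real restorations would use feature tracking or optical flow rather than global integer shifts):

```python
import numpy as np

def best_shift(reference, channel, max_shift=2):
    """Find the integer (dy, dx) shift of `channel` that best matches
    `reference`, by exhaustive search minimizing the sum of squared
    differences. A crude stand-in for proper feature-based alignment.
    """
    best = (0, 0)
    best_err = np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(channel, dy, axis=0), dx, axis=1)
            err = np.sum((shifted - reference) ** 2)
            if err < best_err:
                best_err, best = err, (dy, dx)
    return best

# Toy test: a bright dot, and the same dot displaced by one pixel
# in each direction (as if the subject moved between exposures).
ref = np.zeros((8, 8))
ref[4, 4] = 1.0
moved = np.roll(np.roll(ref, -1, axis=0), -1, axis=1)
shift = best_shift(ref, moved)
```

A global shift only fixes camera motion; subject motion needs the per-region morphing discussed upthread.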