There isn't really much benefit to using chunked encoding for streaming video if you're not generating the video on the fly server-side.
If you are simply streaming a stored video, you can just get the file size, use that as the Content-Length, and send the video. If the content is in a streamable format, the client can read/play the data as it is received. Chunked encoding will in fact add a little overhead compared to just sending it as is.
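As a minimal sketch of that point (the path and MIME type here are purely illustrative): when the file already exists on disk, the size is known up front, so a plain Content-Length header is all you need.

```python
import os

def stored_video_headers(path):
    """Headers for serving a stored video as-is: the size is known up
    front, so a plain Content-Length works and chunked encoding would
    only add framing overhead."""
    return {
        "Content-Type": "video/mp4",  # illustrative; derive from the file in practice
        "Content-Length": str(os.path.getsize(path)),
    }
```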
Chunked encoding is more apt for situations where you don't know beforehand how much data you are going to send.
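For reference, here is roughly what the chunked wire format looks like; a sketch like this only makes sense when the parts arrive incrementally and the total size isn't known in advance.

```python
def chunked_body(parts):
    """Encode an iterable of byte strings as an HTTP/1.1 chunked body:
    each chunk is its length in hex, CRLF, the data, CRLF; a zero-length
    chunk terminates the stream. This is why it suits data of unknown
    total size: no Content-Length is ever needed."""
    out = b""
    for part in parts:
        if part:  # zero-length chunks would prematurely end the stream
            out += b"%x\r\n" % len(part) + part + b"\r\n"
    out += b"0\r\n\r\n"  # terminating chunk
    return out
```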
Yes, this is how Move Networks' streaming works, how Apple's streaming works, and Adobe just announced official support for streaming this way last week[1] (edit: oh, and Microsoft built it into IIS too).
What they do is parallel-encode their video feed at many different bitrates; then, as the client keeps up or falls behind on the incoming chunks, they move up or down the scale of which chunks to send. The best part is that it works via existing HTTP CDNs like Level 3 and Akamai, so just about anybody can stream live video this way to as many people on the internet as they can get to watch it, at the already-commodity cost of CDN bandwidth.
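The move-up-or-down logic described above can be sketched in a few lines; the bitrate ladder values here are made up for illustration, and real players use far more sophisticated heuristics.

```python
def pick_variant(measured_kbps, ladder=(400, 800, 1500, 3000)):
    """Toy version of the ladder logic: pick the highest-bitrate variant
    the measured throughput can sustain, with headroom so the player
    doesn't stall, falling back to the lowest rung otherwise."""
    usable = measured_kbps * 0.8  # leave 20% safety margin
    best = ladder[0]              # lowest rung is the fallback
    for rate in ladder:
        if rate <= usable:
            best = rate
    return best
```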
This is a common misconception. In fact, HTTP Live Streaming as proposed by Apple has nothing to do with chunked transfer encoding: it splits the video into multiple "chunk" files, using an m3u file as a playlist. Stream variants are supported by having multiple sets of video and m3u files, but dynamically switching between them based on bandwidth is left to the client.
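For the curious, a master playlist for the variant setup described above looks roughly like this (the bandwidths and URIs are made up; each variant URI points to its own playlist of chunk files):

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000
low/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=3000000
high/index.m3u8
```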
Not really. Streaming video tends to use byte-range requests (introduced in HTTP 1.1) because you want to be able to drag the video playhead/cursor to an arbitrary location. For example, iOS uses this mechanism.