
The context of that is that the Netflix GUI and the video are being composited together before 3D rendering happens. That's a surprising place to find subtitles, to me. Why are you putting them on the virtual screen? Why not let them float in space in front of your eye? They should be where the sound is, not where the picture is.


I agree conceptually, but our eyes can't focus on two things at once when they're at different distances (or appear to be). Putting the subtitles at the same depth in 3D as the 2D screen is less tiring on the eyes.
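For a rough sense of scale, here's a back-of-the-envelope sketch (the distances and interpupillary distance below are assumptions for illustration, not anything from the thread):

    # How much the eyes have to re-converge when subtitles and the
    # virtual screen sit at different apparent depths.
    import math

    IPD = 0.063  # typical interpupillary distance in metres (assumed)

    def vergence_deg(distance_m):
        # angle between the two eyes' lines of sight for a point at this depth
        return math.degrees(2 * math.atan((IPD / 2) / distance_m))

    screen   = vergence_deg(3.0)   # virtual screen ~3 m away
    subtitle = vergence_deg(1.0)   # subtitles floating ~1 m away
    print(screen, subtitle, subtitle - screen)
    # roughly 1.2 deg vs 3.6 deg: every glance down at the captions means
    # re-converging (and re-focusing), which adds up over a whole film.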


I think subtitles are supposed to be placed at particular places on the screen.


That's assuming we only have a static screen to place them on. In VR, we have the entire environment around the screen, plus the potential for a HUD locked to the user's eyes. It would be cool to try different implementations and see which experience is best.
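As a sketch of what those different implementations could look like (the pose variables and offsets are illustrative assumptions, not any real headset's API):

    # Three placement strategies one could A/B test.
    import numpy as np

    def screen_locked(screen_center):
        # Rendered onto the virtual screen plane, just below the picture.
        return screen_center + np.array([0.0, -0.3, 0.0])

    def world_locked(speaker_pos):
        # Floating in the scene, near whatever is "speaking".
        return speaker_pos + np.array([0.0, 0.2, 0.0])

    def head_locked(head_pos, head_forward, distance=1.5):
        # HUD: recomputed every frame at a fixed offset in front of the eyes.
        return head_pos + head_forward * distance + np.array([0.0, -0.2, 0.0])

    head = np.array([0.0, 1.6, 0.0])
    forward = np.array([0.0, 0.0, -1.0])
    print(head_locked(head, forward))   # -> [ 0.   1.4 -1.5]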


Anime sometimes has unorthodox subtitle placement; it can be nice, but also annoying.

Maybe it's something you get used to, but it can be overwhelming.


Only if you're willing to re-encode all the captions... Captions are encoded to be rendered at particular positions on the screen, so moving them elsewhere is a lot of work.



