Hacker News

Why wouldn't studios use an LLM for biometrics instead of using an individual's attributes? They could churn out new unique virtual actors continuously.


An LLM cannot output a likeness.

Deepfaking is still a manual process that cannot be automated, and the legal rights there are very much up in the air.


An LLM can't, but Stable Diffusion can. Deepfakes are very much automated for still images, and it's really just a matter of time until they're automated for video and 3D.


Even with modern tricks for making Stable Diffusion focus on a specific subject (e.g. Dreambooth, LoRAs, Textual Inversion), it is nowhere near the level needed to replace actors.


Unreal Engine's MetaHuman is getting quite good. I assume in 20 years it will be on par with or surpass the quality of actor likenesses in current movies.


The studios don't want a unique virtual actor. They want the one someone else has proven as successful. They aren't driving down Sunset Blvd asking to scan in unknowns. They want the big name stars.





