
In complete sincerity, I think that speeding up Turing-complete probabilistic programming to the kinds of inference speed we can get in the gradient-descent training of deep neural networks would be a "change the world"-level advance for ML/AI.


We already have that: variational-inference algorithms like BBVI (black-box variational inference) are trained with gradient descent.
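A minimal sketch of what "gradient descent for variational inference" means in practice, on a hypothetical toy model (not from the thread). This uses the reparameterization-gradient variant (ADVI-style) rather than BBVI's score-function estimator, because it is lower-variance and fits in a few lines; the model is a conjugate Gaussian so the fitted approximation can be checked against the exact posterior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy conjugate model:
#   theta ~ N(0, 1),  y_i ~ N(theta, 1)  for i = 1..n
# The exact posterior is N(sum(y) / (n + 1), 1 / (n + 1)), so we can check the fit.
data = rng.normal(2.0, 1.0, size=50)
n, sum_y = len(data), data.sum()

# Variational family: q(theta) = N(mu, exp(log_sigma)^2).
mu, log_sigma = 0.0, 0.0
lr, num_mc = 0.01, 100  # step size and Monte Carlo samples per step

for step in range(2000):
    sigma = np.exp(log_sigma)
    eps = rng.normal(size=num_mc)
    theta = mu + sigma * eps            # reparameterized samples from q
    dlogp = sum_y - (n + 1) * theta     # d/dtheta log p(y, theta)
    # ELBO = E_q[log p(y, theta)] + entropy(q);
    # the entropy's gradient w.r.t. log_sigma is exactly 1.
    g_mu = dlogp.mean()
    g_log_sigma = (dlogp * sigma * eps).mean() + 1.0
    mu += lr * g_mu                     # gradient *ascent* on the ELBO
    log_sigma += lr * g_log_sigma

print(mu, np.exp(log_sigma))  # approx. the exact posterior mean and sd
```

Note the dependence on differentiating through `theta`: this is exactly what breaks when the model contains discrete choices, which is where score-function (REINFORCE-style) estimators, with their much higher variance, come in.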


Gradient-based variational inference only works for continuous (differentiable) probability models, though, so it can't be used for most of the interesting use-cases of probabilistic programming, which involve discrete random choices and stochastic control flow.



