
The software library is here: http://edwardlib.org/. Notably, Edward is layered on TensorFlow.
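For a sense of the layering, here's a Bayesian linear regression sketch adapted from memory of Edward's docs (shapes, priors, and the commented-out training data are illustrative; argument names varied across Edward versions):

    import tensorflow as tf
    import edward as ed
    from edward.models import Normal

    # Random variables are TensorFlow tensors, so the model is a TF graph.
    N, D = 100, 5
    X = tf.placeholder(tf.float32, [N, D])
    w = Normal(loc=tf.zeros(D), scale=tf.ones(D))      # prior on weights
    y = Normal(loc=ed.dot(X, w), scale=tf.ones(N))     # likelihood

    # Variational approximation, also built from TF variables.
    qw = Normal(loc=tf.Variable(tf.zeros(D)),
                scale=tf.nn.softplus(tf.Variable(tf.zeros(D))))

    # inference = ed.KLqp({w: qw}, data={X: X_train, y: y_train})
    # inference.run(n_iter=500)  # X_train/y_train are placeholders here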

Regarding the significance of the authors: David Blei first described latent Dirichlet allocation (LDA), an important algorithm for generative topic modeling, in 2003 (with Andrew Ng and Michael Jordan). Interestingly, last I checked, LDA couldn't be done in Edward (yet).
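For context, LDA's generative story is short enough to sketch (a hedged illustration; all sizes and hyperparameters below are made up):

    import numpy as np

    # LDA's generative process (Blei, Ng & Jordan, 2003): draw a
    # per-document topic mixture, then for each word draw a topic and
    # a word from that topic's distribution over the vocabulary.
    def generate_document(alpha, beta, doc_len,
                          rng=np.random.default_rng()):
        # alpha: (n_topics,) Dirichlet prior; beta: (n_topics, vocab)
        theta = rng.dirichlet(alpha)               # topic mixture
        words = []
        for _ in range(doc_len):
            z = rng.choice(len(alpha), p=theta)    # topic assignment
            words.append(rng.choice(beta.shape[1], p=beta[z]))
        return words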



I also briefly tried it out, being drawn to the claim of Turing completeness, but I wasn't able to get inference working over any model with interesting control flow (e.g. loops). It seemed to have about the same expressive power as PyMC3, albeit running on TensorFlow, which seemed neat. It would be very cool to see something with the expressive power of, say, Church running on TF (sketched below).
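To make "interesting control flow" concrete, here's the classic Church-style example, a geometric random variable defined by unbounded stochastic recursion (plain Python rather than any real Edward/PyMC3 API, since neither expresses this directly):

    import random

    # Geometric distribution via stochastic recursion: flip a coin,
    # recurse on tails. The recursion depth is unbounded but finite
    # with probability 1. Frameworks that trace a fixed graph of
    # random variables can't easily run inference over this.
    def geometric(p):
        return 0 if random.random() < p else 1 + geometric(p)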


In complete sincerity, I think that speeding up Turing-complete probabilistic programming to the kind of inference speed we get from gradient-descent training of deep neural networks would be a "change the world"-level advance for ML/AI.


We already have that: variational-inference-based algorithms like BBVI (black-box variational inference) use gradient descent for training.
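For reference, here's a minimal sketch of the BBVI score-function gradient (Ranganath et al., 2014) for a single Gaussian variational factor; log_joint and all other names here are illustrative, not any library's API:

    import numpy as np

    # One BBVI step for q(z) = N(mu, sigma^2). The ELBO gradient is
    # estimated as E_q[ grad log q(z) * (log p(x, z) - log q(z)) ].
    def bbvi_step(log_joint, mu, sigma, lr=1e-3, n_samples=64,
                  rng=np.random.default_rng()):
        grad_mu, grad_sigma = 0.0, 0.0
        for _ in range(n_samples):
            z = rng.normal(mu, sigma)
            # Score function: gradient of log q(z; mu, sigma).
            score_mu = (z - mu) / sigma**2
            score_sigma = ((z - mu)**2 - sigma**2) / sigma**3
            log_q = (-0.5 * np.log(2 * np.pi * sigma**2)
                     - (z - mu)**2 / (2 * sigma**2))
            weight = log_joint(z) - log_q
            grad_mu += score_mu * weight / n_samples
            grad_sigma += score_sigma * weight / n_samples
        # Stochastic gradient ascent on the ELBO.
        return mu + lr * grad_mu, sigma + lr * grad_sigma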


Reparameterization-based variational inference also only works for continuous latent variables, though, so it can't be used for many of the most interesting use cases of probabilistic programming (discrete structure, stochastic control flow).
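Concretely, the reparameterization trick rewrites a Gaussian draw as a differentiable function of parameter-free noise; nothing analogous exists for a discrete draw (a hedged sketch, not library code):

    import numpy as np

    # z ~ N(mu, sigma^2) rewritten as a deterministic, differentiable
    # function of noise eps ~ N(0, 1), so gradients w.r.t. mu and
    # sigma flow through the sample.
    def reparameterized_sample(mu, sigma, rng=np.random.default_rng()):
        eps = rng.standard_normal()
        return mu + sigma * eps

    # A categorical draw has no such continuous, differentiable path,
    # which is why this estimator family needs continuous latents.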


Why do you want Turing completeness in your probabilistic modelling language? This seems like a domain where you can specify a lot of useful work with bounded loops and other sub-TC tools.


The probability of \bot (nontermination) is 0, because sampling \bot would mean never returning from the computation. That said, there's all kinds of interesting control flow we can describe in a program, knowing it will return a sample almost surely, without having any convenient way to prove to a termination checker that it will (see the sketch below).
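For example (a plain-Python sketch, not tied to any framework): a rejection sampler for a standard normal conditioned on being positive halts with probability 1, but its loop has no syntactic bound a termination checker could use:

    import random

    # Unbounded loop that halts almost surely: each iteration succeeds
    # with probability 1/2, so the chance of looping forever is 0,
    # yet no static bound on the iteration count exists.
    def positive_normal():
        while True:
            x = random.gauss(0.0, 1.0)
            if x > 0.0:
                return x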


Fun fact: LDA was actually first described by three geneticists in 2000:

http://www.genetics.org/content/155/2/945


Yes! Pritchard remains extremely well known for this and subsequent work. If only they had given it a catchy title ;-)


Kevin Murphy wrote the Bayes Net Toolbox, which got me started in the area.

Anyway, this paper is really neat. As far as I can tell, it's a big step toward linking the theory of Bayesian networks with that of neural networks.



