The software library is located here: http://edwardlib.org/. Notably, Edward is layered on TensorFlow.
Regarding the significance of the authors: David Blei first described latent Dirichlet allocation (LDA), an important algorithm for generative topic modeling, in ~2003. Interestingly, last I checked, LDA couldn't be done in Edward (yet).
I also briefly tried it out, drawn by the claim of Turing completeness, but I wasn't able to get inference working over any model with interesting control flow (e.g. loops). It seemed to have about the same expressive power as PyMC3, albeit running on TensorFlow, which seemed neat. It would be very cool to see something with the expressive power of, say, Church running on tf.
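To make "interesting control flow" concrete, here's a hedged sketch (plain Python, not Edward/Church syntax) of the kind of Church-style generative model I mean: the recursion itself is stochastic, so the shape of the computation is random rather than a fixed graph, which is exactly what graph-based frameworks struggle to do inference over.

```python
import random

# A toy generative model with stochastic control flow: recurse with
# probability 1 - p, so the return value is geometrically distributed.
# The recursion depth is itself random -- there is no fixed model graph.
def geometric(p):
    if random.random() < p:
        return 0
    return 1 + geometric(p)

# Forward sampling is trivial; it's *inference* over models like this
# that frameworks with static graphs can't easily express.
samples = [geometric(0.5) for _ in range(10000)]
mean = sum(samples) / len(samples)  # E[X] = (1 - p) / p = 1.0 for p = 0.5
```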
In complete sincerity, I think that speeding up Turing-complete probabilistic programming to the kind of inference speed we get from gradient-descent training of deep neural networks would be a "change the world"-level advance for ML/AI.
Variational inference, at least in its standard reparameterization-based form, also only works for continuous probability models, so it can't be used for many of the most interesting use cases of probabilistic programming.
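To illustrate one reason for the continuous restriction: modern VI typically relies on the reparameterization trick, which rewrites a sample as a deterministic, differentiable function of parameter-free noise so gradients can flow to the parameters. A minimal sketch (plain Python, names are my own) of why this works for a Normal but not a Bernoulli:

```python
import random

# Reparameterization: a N(mu, sigma^2) sample written as a function of
# noise that doesn't depend on the parameters. The expression
# mu + sigma * eps is differentiable in mu and sigma, so a gradient-based
# optimizer can push gradients through the sampling step.
def sample_normal(mu, sigma):
    eps = random.gauss(0.0, 1.0)   # parameter-free noise
    return mu + sigma * eps        # differentiable in mu and sigma

# A Bernoulli(p) draw has no analogous differentiable form: the output
# jumps discontinuously from 0 to 1 as p crosses the uniform draw, so
# there's no useful gradient with respect to p.
def sample_bernoulli(p):
    return 1 if random.random() < p else 0
```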
Why do you want Turing completeness in your probabilistic modelling language? This seems like a domain where you can specify a lot of useful work with bounded loops and other sub-TC tools.
The probability of ⊥ is 0, because sampling ⊥ would require the computation to never return. That said, there's all kinds of interesting control flow we can describe in a program, knowing it will return a sample, without having any convenient way to prove to a termination checker that it will.
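A standard example of this gap: rejection sampling. The loop below terminates with probability 1, but there is no static bound on the number of iterations, so a conventional termination checker can't prove it halts (a sketch in plain Python):

```python
import random

# Rejection-sample a uniform point from the unit disc. Each iteration
# accepts with probability pi/4, so the loop terminates almost surely --
# yet no iteration bound exists for a termination checker to find.
def sample_unit_disc():
    while True:
        x = random.uniform(-1.0, 1.0)
        y = random.uniform(-1.0, 1.0)
        if x * x + y * y <= 1.0:
            return (x, y)
```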