Depending on how long it takes to "evaluate the goodness of a solution," techniques like multi-start gradient descent can rapidly become intractable, especially in higher dimensions. There are a handful of open source libraries out there that try to tackle this time-consuming and expensive black-box optimization problem from a more Bayesian approach [1] [2]. There's also a YC company offering this as a service (full disclosure, I'm a co-founder) [3]. Combining these methods with TensorFlow could be very powerful and is one step closer to more automatic ML workflows.
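To make the cost concrete, here's a minimal sketch of multi-start local optimization on a toy multimodal function, using SciPy's L-BFGS-B. The objective here is a stand-in: in real hyperparameter tuning each evaluation might be a full model-training run, so the `n_starts * iterations` evaluations below are exactly what becomes intractable.

```python
import numpy as np
from scipy.optimize import minimize

def objective(x):
    """Toy multimodal 'black box'; each call would be an expensive
    training-and-validation run in a real tuning problem."""
    return np.sum(x**2) + 3.0 * np.sum(np.sin(4.0 * x)**2)

rng = np.random.default_rng(0)
dim = 5        # cost grows quickly with dimensionality
n_starts = 20  # each start is a full local optimization

best_x, best_val = None, np.inf
for _ in range(n_starts):
    x0 = rng.uniform(-2.0, 2.0, size=dim)  # random restart
    res = minimize(objective, x0, method="L-BFGS-B")
    if res.fun < best_val:
        best_x, best_val = res.x, res.fun
```

Each restart runs many objective evaluations, so when a single evaluation takes minutes or hours this strategy stops being practical; the Bayesian libraries above try to choose each evaluation point more carefully instead.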
Yes, if you could expose the parameters to MOE from within TensorFlow. We're going to work on an example showing how this can be done with SigOpt, which has a similar interface.
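The integration pattern is basically a suggest/observe loop. The sketch below uses a tiny random-search class as a stand-in for the external optimizer (the class name and its `suggest`/`observe` methods are hypothetical, not MOE's or SigOpt's actual API), and a stub in place of training a TensorFlow model:

```python
import random

class RandomSearchOptimizer:
    """Hypothetical stand-in for an external optimizer like MOE or
    SigOpt: propose hyperparameters, then report back the result."""
    def __init__(self, bounds, seed=0):
        self.bounds = bounds
        self.rng = random.Random(seed)
        self.history = []

    def suggest(self):
        # Sample each hyperparameter uniformly from its bounds.
        return {name: self.rng.uniform(lo, hi)
                for name, (lo, hi) in self.bounds.items()}

    def observe(self, params, value):
        self.history.append((params, value))

    def best(self):
        return min(self.history, key=lambda pv: pv[1])

def train_and_evaluate(params):
    """Stub for building/training a TensorFlow model with these
    hyperparameters and returning its validation loss. Toy loss:
    pretend the optimum is learning_rate=0.1, l2=0.01."""
    return ((params["learning_rate"] - 0.1) ** 2
            + (params["l2"] - 0.01) ** 2)

bounds = {"learning_rate": (1e-4, 1.0), "l2": (0.0, 0.1)}
opt = RandomSearchOptimizer(bounds)
for _ in range(50):
    params = opt.suggest()            # optimizer proposes a config
    loss = train_and_evaluate(params) # expensive black-box evaluation
    opt.observe(params, loss)         # report the result back

best_params, best_loss = opt.best()
```

Swapping the stand-in class for a real Bayesian optimization service keeps the loop structure identical; only the suggestion strategy gets smarter.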
[1]: https://github.com/Yelp/MOE (full disclosure, I co-wrote this)
[2]: https://github.com/hyperopt/hyperopt
[3]: https://sigopt.com/cases/machine_learning