I found the Stanford course assumed too much of a stats background to be easily accessible. Starting with the math foundations is sound, but it's scary for people who don't dream in LaTeX :)
Good stuff - I added http://szeliski.org/Book/ to my reading list, thanks for sharing the link. There's a huge overlap for certain classes of problems. CV in many ways presents the same challenges as large online data streams: noisy, time-critical, huge volumes of data, and problematic feature extraction. Hybrid solutions are usually required.
I've found that reading books from other ML domains helps out in understanding the application and getting ideas on how to approach the problem.
In my experience, Szeliski's book works better as a reference, since it covers a lot of material (just see the number of citations at the end). I don't think it's an easy read without reading (some of) the cited papers, or without background knowledge.
I read that one; I liked the fact that he builds up each example from first principles. It's hard to find explanations that bridge theory and practice.
They have slightly different philosophies about how they work with their companies. They also have different feels from the perspective of the entrepreneurs. (Not even going to go near the flame bait of trying to sum up the diff --brief for the two companies.)
Also - the different TechStars have a different feel depending on which one you're involved in (e.g. Boston ain't Boulder ;) )
I would talk to a few of the more recent enrollees and a few of the graduates of both programs and ask them for their perspective.