I graduated in 2006 (undergrad CS degree), and at the time, we were told SVMs were a lot more practical than something like neural nets. Neural nets were framed as "we will teach you this thing because it's fun to code back-prop and it kind of works in a way we think your brain does too, but no one really uses them in real life, except for classifying digits".
Funny how that happens. In 2006 I was taking a grad-level Intro to ML course. After the first (and only) lecture on neural nets, I asked the prof for recommendations on learning more -- since I had mostly a cogsci background, I was pretty interested because of the "neural" part. The prof essentially said the same thing as yours (I don't blame him -- it was a common sentiment at the time). And of course, these days he's doing deep learning!
Even in 2014 (grad CS degree), my data mining professor said SVMs were advantageous over neural nets because their convex objective avoids the local minima problem, so we never learned neural nets in class.
That's a bit weird. Sure, it's an advantage if you can optimize the objective function more easily. But the end goal is generalization, i.e. performance on new data. The objective function is only a proxy for that.
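You can see the proxy effect with a toy experiment (a sketch with made-up data, not from the thread): a high-degree polynomial drives the training objective to essentially zero yet does worse on held-out data than a low-degree fit, so "easier/better optimization of the objective" doesn't by itself mean better generalization.

```python
# Sketch: lower training loss does not imply better performance on new data.
# Hypothetical setup: noisy samples of sin(2*pi*x), fit with polynomials.
import numpy as np

rng = np.random.default_rng(0)

# 15 noisy training points; clean held-out points for evaluation.
x_train = np.linspace(0, 1, 15)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.3, x_train.size)
x_test = np.linspace(0, 1, 100)
y_test = np.sin(2 * np.pi * x_test)

def fit_and_eval(degree):
    # Least-squares polynomial fit; higher degree = lower training MSE,
    # but not necessarily lower test MSE.
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_mse, test_mse

for d in (3, 14):
    tr, te = fit_and_eval(d)
    print(f"degree {d}: train MSE {tr:.4f}, test MSE {te:.4f}")
```

Degree 14 interpolates all 15 training points (near-zero training loss) but oscillates wildly between them, while degree 3 has higher training loss and a lower test error.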
Funny how times have changed.