I've been in the industry for quite a while, and I think one of the often-overlooked non-technical reasons is that SVMs are hard to get an intuitive feel for, compared to ANNs. This might seem irrelevant, but think about it: someone who doesn't have a degree or rigorous training in ML or an ML-heavy program (and this population is big in the industry, people looking to get into ML from, say, analytics or software dev) will try out things they can identify with. ANNs are positioned exactly right for this: they're sufficiently sophisticated, you can quickly get a high-level idea, and there is the attractive comparison to how our minds work. So that's what people try out first among the advanced algorithms. The industry doesn't give you a lot of time to explore, so once you have invested time in picking up ANNs, you tend to hold on to that knowledge.
I've also conducted multiple training sessions and discussions with small groups on ML, and the experience supports what I've said. SVMs are hard to explain to a general crowd; with ANNs I have enough visual cues to get them started. Sure, they might not get the math right away, but they understand enough to be comfortable using a library.
As an aside, a comment here mentions that logistic/linear regression dominates the industry. I think it's for a similar reason: they're simple to understand and try out. That doesn't make them good models; in my experience they fall short on a bunch of real-world problems.
Now if you ask me about the technical cons of SVMs, I'd say: the scalability of non-linear kernels, and the fact that I have to cherry-pick kernels. Linear and RBF kernels work well on most problems, but even then, on a bunch of problems where RBF seems to work well, the number of support vectors stored by the model can be massive. Even if I weren't pedantic about it and excused the fact that the kernel seems to be "memorizing" more than "learning", such a model is still a beast to run in real time, since prediction cost scales with the number of support vectors. nu-SVMs address this issue to an extent by giving you a knob over the fraction of support vectors, but then we are back to picking the right kernel for the task. This is one thing I love about ANNs: the kernel (or what is essentially the kernel) is learned.
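To make the support-vector-count point concrete, here's a quick sketch using scikit-learn (the dataset and hyperparameters are illustrative, not from any particular problem I mentioned): train an RBF-kernel SVM and count how many training points the model must keep around, since every one of them is evaluated against each new input at prediction time.

```python
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Illustrative synthetic dataset; real-world data is where the SV count blows up.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)

# Every support vector is stored in the model and used in every prediction,
# so this number drives both the memory footprint and the inference cost.
n_sv = clf.support_vectors_.shape[0]
print(f"support vectors kept: {n_sv} of {len(X)} training points")
```

If a large fraction of the training set ends up as support vectors, that's the "memorizing more than learning" smell, and it's also why real-time serving gets expensive. `NuSVC` exposes a `nu` parameter that trades off margin errors against the fraction of support vectors, which is the knob I alluded to above.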