
From someone who used SVMs in computer vision before CNNs took over:

* If you don't know the "shape" of your data, you can at least try the "kitchen sink" approach: just try out a bunch of different transformations and/or kernels, possibly in combination with each other, and see what works. This was a relatively popular approach in practice, but usually done with more rigor than I'm making it sound like. And there was even a popular method called "random kitchen sinks"! [1]
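The core trick in [1] is random Fourier features: draw random frequencies so that an inner product of randomized feature maps approximates an RBF kernel. Here is a minimal numpy sketch (the specific bandwidth, feature count, and seed are illustrative assumptions, not values from the paper):

```python
import numpy as np

def random_fourier_features(X, n_features=2000, gamma=0.5, seed=0):
    """Randomized feature map z(x) with z(x).z(y) ~ exp(-gamma*||x-y||^2)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Frequencies sampled from the Fourier transform of the RBF kernel.
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

x = np.array([[1.0, 0.0]])
y = np.array([[0.0, 1.0]])
gamma = 0.5
exact = np.exp(-gamma * np.sum((x - y) ** 2))  # true RBF kernel value
zx = random_fourier_features(x, gamma=gamma)
zy = random_fourier_features(y, gamma=gamma)
approx = (zx @ zy.T).item()                    # randomized approximation
```

The payoff is that a plain linear model on these features behaves like a kernel machine, at a fraction of the cost on large datasets.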

* You might not have a good idea of what a good transformation might be, but maybe you have some idea of how to measure similarity between data points? That similarity measure can serve as a kernel, which for SVMs is in some sense equivalent to specifying a transformation.
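To make the point concrete with something smaller than an SVM: a kernel perceptron only ever touches the data through a similarity function, never an explicit feature map. This is a toy sketch (RBF similarity and XOR labels are my choices for illustration), but the dual-form structure is the same one an SVM exploits:

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    # Any similarity whose Gram matrix is positive semidefinite is a valid kernel.
    return np.exp(-gamma * np.sum((a - b) ** 2))

def kernel_perceptron(X, y, kernel, epochs=10):
    # Dual form: the model is a weighted sum of kernel evaluations
    # against training points; no explicit transformation is computed.
    n = len(X)
    alpha = np.zeros(n)
    K = np.array([[kernel(X[i], X[j]) for j in range(n)] for i in range(n)])
    for _ in range(epochs):
        for i in range(n):
            if np.sign(np.sum(alpha * y * K[:, i])) != y[i]:
                alpha[i] += 1  # misclassified: increase this point's weight
    return alpha

def predict(alpha, X_train, y_train, kernel, x):
    s = sum(a * yi * kernel(xi, x) for a, yi, xi in zip(alpha, y_train, X_train))
    return 1 if s >= 0 else -1

# XOR: not linearly separable, but separable under the RBF similarity.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([-1, 1, 1, -1])
alpha = kernel_perceptron(X, y, rbf)
preds = [predict(alpha, X, y, rbf, x) for x in X]
```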

* There is actually a fair amount of literature on learning transformations and kernels, or at least optimizing the parameters of kernels somehow. What you can do also depends on what kinds of labels (supervision) you have, if any.

* And finally, yeah, if you're at the point of trying to learn kernels or transformations, you're pretty close to what neural networks are already doing, and fairly successfully at the moment. At least in computer vision, one of the main reasons CNNs have largely replaced SVMs is that they can learn task-specific features given the right architecture/training method/hyperparameters and enough data. (What is the right architecture/training method/hyperparameters? Well... back to the "kitchen sink" method ;))

[1] (https://people.eecs.berkeley.edu/~brecht/papers/08.rah.rec.n...)


