
Soft margin is not as good as hard margin

This note is a link to an excerpt from my upcoming monster support vector machine article (where I work through a number of sections of [Vapnik, 1998] Vapnik, V. N. (1998), Statistical Learning Theory, Wiley). In it I try to trace precisely how the original theoretical support vector machine claims connect to what is said about the common implementations. The write-up is fairly technical and quite large (26 pages).

Here we are extracting an appendix: “Soft margin is not as good as hard margin.” In it we build a toy problem that is not large-margin separated and note that, if the dimension of the concept space you are working in is not obvious (i.e. you are forced to rely on the margin-derived portion of the generalization bounds), then the generalization improvement for a soft margin SVM is much slower than experience with the hard margin theorems would lead you to expect. The punch line: every time you get eight times as much training data you only halve your expected excess generalization error bound (whereas once you get below a data set’s hard-margin bound you expect a one-to-one reduction of the bound with respect to training data set size). What this points out is: the soft margin idea can simulate margin, but it comes at a cost.
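To make the punch line concrete, here is a minimal back-of-the-envelope sketch (not the worked example from the PDF or notebook; the function names and constants are placeholders). If a bound halves whenever the training set size n is multiplied by 8, it must scale like n^(-1/3), since 8^(-1/3) = 1/2; a bound that improves one-to-one with data scales like 1/n.

```python
# Illustration of the two claimed scaling rates (assumed forms, for intuition only).

def soft_margin_rate(n, c=1.0):
    # margin-driven soft-margin style bound: halves when n is multiplied by 8
    return c * n ** (-1.0 / 3.0)

def hard_margin_rate(n, c=1.0):
    # hard-margin style bound: shrinks one-to-one with n (halves when n doubles)
    return c / n

for n in (1000, 8000, 64000):
    print(f"n={n:6d}  soft ~ {soft_margin_rate(n):.4f}  hard ~ {hard_margin_rate(n):.6f}")

# Each 8x increase in n halves the soft-margin column,
# but shrinks the hard-margin column by a factor of 8.
```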

The PDF extract is here, and the work in IPython notebook form is here (though that would obviously take some setup to run).

Categories: Mathematics

jmount

Data Scientist and trainer at Win Vector LLC. One of the authors of Practical Data Science with R.
