by Dr Andy Corbett
Support Vector Machines
8. Support Vector Machines to Maximise Decision Margins
- Understanding how a Support Vector Machine Classifier works.
- Finding the distance from a data cluster to the decision boundary.
- Maximising classification margins.
- Solving the constrained optimisation problem.
- Identifying the support vectors.
Classification is not always clear cut. In Fig. 1, when life gives you two neatly separated piles of lemons (left), the decision boundary is simple, and SVMs will allow you to maximise the margin between them. When life mixes your lemons (right), we can still separate the piles while making the fewest possible mistakes.
Figure 1. Classification Examples.
Decision boundaries and supporting vectors
In classification tasks, we consider ways to label, or partition, data points into distinct categories called classes. For demonstration, let us first consider a binary classification task, for which we have $N$ measurements of training data $\mathbf{x}_n$, each corresponding to a target value $t_n \in \{-1, +1\}$. The learning task is to build a linear function

$$y(\mathbf{x}) = \mathbf{w}^\top \mathbf{x} + b, \quad (1)$$

where the weight parameters $\mathbf{w}$ and bias $b$ are optimised to fit the dataset $\{(\mathbf{x}_n, t_n)\}_{n=1}^N$. This function will predict the line between the two classes: the boundary on which $y(\mathbf{x}) = 0$.
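The linear model above can be sketched in a few lines of code. The weights and bias here are illustrative placeholders, not fitted values; in practice they would come from the optimisation described below.

```python
import numpy as np

# Hypothetical parameters of the linear model y(x) = w^T x + b.
w = np.array([2.0, -1.0])   # weight vector (illustrative, not fitted)
b = -0.5                    # bias term (illustrative)

def decision(x):
    """Evaluate y(x) = w^T x + b; the sign gives the predicted class."""
    return w @ x + b

def classify(x):
    """Return the predicted label in {-1, +1}."""
    return 1 if decision(x) >= 0 else -1

print(classify(np.array([1.0, 0.0])))   # y = 2 - 0.5 = 1.5  -> +1
print(classify(np.array([0.0, 2.0])))   # y = -2 - 0.5 = -2.5 -> -1
```

Points with $y(\mathbf{x}) > 0$ fall on one side of the boundary and are labelled $+1$; points with $y(\mathbf{x}) < 0$ are labelled $-1$.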
We shall see shortly that the linear-model assumption in SVMs may easily be made non-linear via the kernel trick.
Our strategy shall be to maximise the margin between the two classes; that is, to maximise the perpendicular distance between the boundary $y(\mathbf{x}) = 0$ and the closest data point(s). Notice that any correctly classified point will satisfy $t_n y(\mathbf{x}_n) > 0$. The margin distance of a point from the boundary is given by $t_n y(\mathbf{x}_n) / \lVert \mathbf{w} \rVert$.
Figure 2. The SVM's objective: maximise the margin width between the two classes based on the closest points to the central boundary.
This objective, to maximise the margin, may be reformulated as minimising the square-norm of the weights, $\frac{1}{2}\lVert \mathbf{w} \rVert^2$, subject to the constraint $t_n(\mathbf{w}^\top \mathbf{x}_n + b) \geq 1$ that each point lies on or outside the margin boundary.
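We can check these constraints numerically. The sketch below uses a hypothetical separating hyperplane and toy points; it verifies that every point satisfies $t_n(\mathbf{w}^\top \mathbf{x}_n + b) \geq 1$ and computes the full margin width $2/\lVert\mathbf{w}\rVert$.

```python
import numpy as np

# Hypothetical hyperplane and toy dataset (illustrative values).
w = np.array([1.0, 1.0])
b = -3.0
X = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])  # toy points
t = np.array([-1, 1, 1])                            # labels in {-1, +1}

# Functional margins t_n * y(x_n): all should be >= 1 for a hard margin.
margins = t * (X @ w + b)
print(margins)                 # [1. 1. 3.] -> constraints satisfied

# Width of the full margin between the two classes.
print(2 / np.linalg.norm(w))   # 2 / sqrt(2) ~= 1.414
```

The first two points sit exactly on the margin (functional margin $1$); the third lies safely beyond it.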
An aside on rewriting the objective
For those interested, we can write our objective as

$$\arg\max_{\mathbf{w},\, b} \left\{ \frac{1}{\lVert \mathbf{w} \rVert} \min_n \big[ t_n (\mathbf{w}^\top \mathbf{x}_n + b) \big] \right\}.$$

Since we are free to rescale $\mathbf{w}$ and $b$ by constants, we can without loss of generality assume that $t_n (\mathbf{w}^\top \mathbf{x}_n + b) \geq 1$ for all $n$, with the minimum landing on $1$ exactly. The problem of maximising $1/\lVert \mathbf{w} \rVert$ is then equivalent to minimising $\frac{1}{2}\lVert \mathbf{w} \rVert^2$.
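The rescaling freedom in this aside can be demonstrated numerically: multiplying $(\mathbf{w}, b)$ by a constant $c$ scales the functional margin $t_n y(\mathbf{x}_n)$ but leaves the geometric margin $t_n y(\mathbf{x}_n)/\lVert\mathbf{w}\rVert$, and hence the boundary, unchanged. Values below are illustrative.

```python
import numpy as np

# Hypothetical hyperplane and a single labelled point (illustrative).
w, b = np.array([2.0, 0.0]), -2.0
x, t_n = np.array([3.0, 1.0]), 1

for c in [1.0, 0.5, 10.0]:
    func = t_n * (c * w @ x + c * b)        # functional margin scales with c
    geo = func / np.linalg.norm(c * w)      # geometric margin is invariant
    print(round(geo, 6))                    # 2.0 every time
```

This is why the normalisation $\min_n t_n y(\mathbf{x}_n) = 1$ costs nothing: it simply fixes the scale $c$.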
Solution to the optimisation problem
The answer to this problem is quite brilliant and goes back to ideas of the mathematicians Lagrange and Euler in the 18th century. The constraint is inserted into the objective by multiplying by constants $a_n \geq 0$ for each $n = 1, \dots, N$. Starting with the model in Eq. (1), this gives the Lagrangian

$$L(\mathbf{w}, b, \mathbf{a}) = \frac{1}{2}\lVert \mathbf{w} \rVert^2 - \sum_{n=1}^{N} a_n \big[ t_n (\mathbf{w}^\top \mathbf{x}_n + b) - 1 \big].$$

Differentiating this equation by $\mathbf{w}$ and $b$ and setting the result to zero transforms the problem into a 'dual' form, whose solution takes the form

$$y(\mathbf{x}) = \sum_{n=1}^{N} a_n t_n\, \mathbf{x}_n^\top \mathbf{x} + b,$$

where $a_n$ denotes the Lagrange multiplier corresponding to the $n$-th data point, alongside the constraints

$$a_n \geq 0, \qquad t_n y(\mathbf{x}_n) - 1 \geq 0, \qquad a_n \big( t_n y(\mathbf{x}_n) - 1 \big) = 0.$$
The final constraint implies that, for each point, either $t_n y(\mathbf{x}_n) = 1$, or $a_n = 0$ and the point does not contribute to the final model sum $\sum_n a_n t_n \mathbf{x}_n^\top \mathbf{x}$.
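The dual-form prediction can be sketched directly. The multipliers $a_n$ below are hypothetical (in practice they come from solving the dual quadratic programme); the point is that any point with $a_n = 0$ drops out of the sum entirely.

```python
import numpy as np

# Toy data with hypothetical Lagrange multipliers (not from a real solve).
X = np.array([[1.0, 1.0], [2.0, 2.0], [0.0, 3.0]])
t = np.array([-1, 1, 1])
a = np.array([0.5, 0.5, 0.0])   # a_3 = 0: third point is not a support vector
b = 0.0

def predict(x):
    """Dual-form model: y(x) = sum_n a_n t_n (x_n . x) + b."""
    return sum(a_n * t_n * (x_n @ x) for a_n, t_n, x_n in zip(a, t, X)) + b

# The third point contributes nothing; only the two support vectors matter.
print(predict(np.array([3.0, 0.0])))   # 0.5*(-1)*3 + 0.5*1*6 + 0 = 1.5
```

Note that only inner products $\mathbf{x}_n^\top \mathbf{x}$ appear: this is precisely the structure the kernel trick exploits to make the model non-linear.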
This concept of margin introduces the notion of a special set of vectors, support vectors, that govern the final model. These are the vectors that sit on the edge of the margin, with $t_n y(\mathbf{x}_n) = 1$. This solution is important for two reasons:
- The solution is sparse, and very lightweight. Even if we had a tremendously large amount of data ($N \gg 1$), the set of support vectors used in inference is typically far smaller.
- The support vectors make the (eventually) non-parametric model interpretable: they are the data points most delicately affecting the classification and can highlight unseen structure in the data.
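Both points can be observed with an off-the-shelf solver. The sketch below fits scikit-learn's `SVC` with a linear kernel and a large `C` (approximating the hard-margin problem) on an illustrative toy dataset, then counts the support vectors that survive.

```python
import numpy as np
from sklearn.svm import SVC

# Illustrative dataset: two well-separated Gaussian clusters of 50 points each.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 0.5, size=(50, 2)),
               rng.normal(+2.0, 0.5, size=(50, 2))])
t = np.array([-1] * 50 + [1] * 50)

# Large C approximates the hard-margin SVM.
clf = SVC(kernel="linear", C=1e6).fit(X, t)

# Indices of the support vectors: typically far fewer than the 100 points.
print(len(clf.support_))
```

Inspecting `clf.support_` (or `clf.support_vectors_`) reveals exactly which training points sit on the margin and govern the fitted boundary.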