by Prof Tim Dodwell
General Linear Models
2. The Foundations of ML - Curve Fitting
In this explainer we will give a general introduction to linear models.
- We will talk through what a linear model is.
- How this works for simple linear models, like fitting polynomials, which we have probably met before.
- Then we will look at how we can build linear models which are more expressive.
- We will show how linear models are trained, both for small example problems and data sets, and for much larger problems.
- Finally we will touch on the interpretation of the bias parameter, which comes naturally out of linear models.
Starter Question. Is the following a linear model?
You should be able to confidently answer this question, and give a reason for your answer by the end of this explainer!
So in this explainer, we are very much looking at supervised learning methods. For this we will start by introducing Generalised Linear Models (or GLMs) in the context of regression.
A regression task is to predict one or more continuous target variables $t$, given a $D$-dimensional vector $\mathbf{x}$ of input variables.
So why are GLMs super important?
- They provide the foundations for introducing the core concepts in machine learning.
- They are widely used and (can be) highly effective.
- If it is possible to build one, they are often more explainable than other models, in part because they build on our understanding of "correlation" as a central concept.
- They provide a stepping stone to understand more complex machine learning models, in particular Neural Networks and Gaussian Processes.
Fitting Polynomials
So we are probably used to fitting straight lines through data. I remember doing this at school:
$$ y = mx + c, $$
where $c$ has the interpretation of the intercept and $m$ is the gradient (or slope) of the curve.
We can extend this to higher-dimensional inputs. What we mean by this is that there are two input variables $x_1$ and $x_2$, and so
$$ y(\mathbf{x}) = w_0 + w_1 x_1 + w_2 x_2 $$
is a linear model (of linear functions) for a two-dimensional input. Equally we can have models which are higher-order polynomials, where we fit quadratic, cubic or higher-order curves through the data, so in general we have
$$ y(x) = w_0 + w_1 x + w_2 x^2 + \dots + w_M x^M = \sum_{j=0}^{M} w_j x^j. $$
So the key point here is that when we talk about linear models, it doesn't mean the functions themselves are linear (although they can be); it means the model is linear in its weights. So we can also go to higher dimensions (more than one input) and higher order,
$$ y(\mathbf{x}) = w_0 + w_1 x_1 + w_2 x_2 + w_3 x_1 x_2 + w_4 x_1^2 + w_5 x_2^2 $$
for example, but such models can be made arbitrarily complex.
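As a quick illustration of this kind of curve fitting, here is a minimal sketch (in Python with NumPy, on made-up synthetic data) of fitting a cubic polynomial through noisy one-dimensional samples:

```python
import numpy as np

# Synthetic 1D data: noisy samples of an underlying cubic (illustrative only)
rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 30)
t = 0.5 - 1.2 * x + 2.0 * x**3 + 0.1 * rng.standard_normal(x.shape)

# Fit a cubic: np.polyfit returns coefficients ordered from highest power to lowest
coeffs = np.polyfit(x, t, deg=3)

# Evaluate the fitted polynomial at new inputs to make predictions
x_new = np.linspace(-1.0, 1.0, 200)
y_fit = np.polyval(coeffs, x_new)
```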
A More "General" Linear Model
A linear model is defined by a linear combination of functions. These functions, which are known as basis functions (or, in machine learning, features), can be any general nonlinear functions.
In general we can write a linear model as follows:
$$ y(\mathbf{x}, \mathbf{w}) = \sum_{j=0}^{M-1} w_j \phi_j(\mathbf{x}), $$
where each of the $w_j$'s (or weights) is a scalar number, or, to be fancy, we write $w_j \in \mathbb{R}$; that is, $w_j$ belongs to the set of real numbers. In machine learning you might also see (and will see me write most of the time) such a linear model written in vector form as follows:
$$ y(\mathbf{x}, \mathbf{w}) = \mathbf{w}^\top \boldsymbol{\phi}(\mathbf{x}), $$
where $\mathbf{w} = (w_0, \dots, w_{M-1})^\top$ and $\boldsymbol{\phi}(\mathbf{x}) = (\phi_0(\mathbf{x}), \dots, \phi_{M-1}(\mathbf{x}))^\top$.
Maybe to labour the point a little, this is just a more general case: if we think of our original straight-line model $y = mx + c$, then this is simply
$$ y(x, \mathbf{w}) = w_0 \phi_0(x) + w_1 \phi_1(x), \quad \text{with} \quad \phi_0(x) = 1, \;\; \phi_1(x) = x, $$
and hence $w_0 = c$ and $w_1 = m$.
It is often the convention to reserve the first feature function, i.e. $\phi_0$, to be the constant function. This means that $\phi_0(\mathbf{x}) = 1$, and therefore
$$ y(\mathbf{x}, \mathbf{w}) = w_0 + \sum_{j=1}^{M-1} w_j \phi_j(\mathbf{x}). $$
The parameter $w_0$ therefore allows us to model any fixed offset in the data, and for that reason is often called the bias parameter (different from bias in a statistical sense).
The basis or feature functions themselves can be pre-defined. So, for example, for a single input variable $x$ the basis could be powers of $x$, i.e. $\phi_j(x) = x^j$. Other options include the Mexican hat function, where basis functions are defined on overlapping regions (think finite element shape functions), so that each one is non-zero only over a small part of the input.
Other basis functions include
$$ \phi_j(x) = \exp\left( -\frac{(x - \mu_j)^2}{2 s^2} \right), $$
where each basis function is determined by a centre point $\mu_j$ and a length scale $s$. These are often called Gaussian basis functions. Another common option is the sigmoidal basis function, which is defined by
$$ \phi_j(x) = \sigma\!\left( \frac{x - \mu_j}{s} \right), \quad \text{where} \quad \sigma(a) = \frac{1}{1 + e^{-a}}. $$
To see what these look like on a plot, let's have a look - polynomials on the left, Gaussians in the middle and sigmoid basis functions on the right.
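If you want to reproduce a plot like this yourself, here is a minimal sketch (in Python with NumPy and Matplotlib; the centres $\mu_j$ and the length scale $s$ are arbitrary choices for illustration):

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-1.0, 1.0, 400)
mus = np.linspace(-1.0, 1.0, 9)   # arbitrary centres for the localised bases
s = 0.2                           # arbitrary length scale

fig, axes = plt.subplots(1, 3, figsize=(12, 3))

# Left: polynomial basis, phi_j(x) = x^j
for j in range(9):
    axes[0].plot(x, x**j)
axes[0].set_title("Polynomial")

# Middle: Gaussian basis, phi_j(x) = exp(-(x - mu_j)^2 / (2 s^2))
for mu in mus:
    axes[1].plot(x, np.exp(-(x - mu)**2 / (2 * s**2)))
axes[1].set_title("Gaussian")

# Right: sigmoidal basis, phi_j(x) = sigma((x - mu_j) / s)
for mu in mus:
    axes[2].plot(x, 1.0 / (1.0 + np.exp(-(x - mu) / s)))
axes[2].set_title("Sigmoidal")

plt.tight_layout()
plt.show()
```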
We will also see later in the course that the basis functions don't have to be well-defined formulae as we have above; they can actually be discovered from the data. Whilst this is outside the scope of this explainer (it will be covered elsewhere), unsupervised machine learning models can be used to learn them. For those interested in looking ahead, consider reading about Principal Component Analysis (aka PCA) and autoencoders. More on that later.
Training / Fitting a Linear Model
The important concept now is that, given a set of basis functions $\phi_j(\mathbf{x})$, we wish to find the weights $\mathbf{w}$ which minimise the error on the training data - so we need to introduce the idea of fit!
A very natural way to define the error is as follows: for $N$ pairs of training data $\{(\mathbf{x}_i, t_i)\}_{i=1}^{N}$, we can define the mean-square loss function for a given set of weights $\mathbf{w}$ as
$$ L(\mathbf{w}) = \frac{1}{N} \sum_{i=1}^{N} \left( t_i - y(\mathbf{x}_i, \mathbf{w}) \right)^2 = \frac{1}{N} \sum_{i=1}^{N} \left( t_i - \mathbf{w}^\top \boldsymbol{\phi}(\mathbf{x}_i) \right)^2. $$
Our objective is therefore to minimise this loss function.
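As a small concrete sketch (assuming the basis functions have already been evaluated into an $N \times M$ NumPy array `Phi`, whose entry $(i, j)$ is $\phi_j(\mathbf{x}_i)$, with the targets in a length-$N$ array `t`), the loss is just a few lines:

```python
import numpy as np

def mse_loss(w, Phi, t):
    """Mean-square loss L(w) for a linear model, given basis evaluations Phi (N x M)."""
    residuals = t - Phi @ w          # t_i - w^T phi(x_i) for every data point
    return np.mean(residuals**2)
```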
A natural way to do this is to find the gradient of the error with respect to the weights $w_j$, and use this information to learn how to adjust (for the nonlinear case) or pick the weights to minimise the error. The gradient basically tells us how much the error will increase/decrease if we increase/decrease the value of $w_j$. Optimisation strategies then seek to change the values of the weights to reduce the error.
When we have reached a minimum error, then the gradient with respect to each of the weights will be zero.
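Concretely, for the mean-square loss defined above, differentiating with respect to each weight gives the conditions
$$ \frac{\partial L}{\partial w_j} = -\frac{2}{N} \sum_{i=1}^{N} \left( t_i - \mathbf{w}^\top \boldsymbol{\phi}(\mathbf{x}_i) \right) \phi_j(\mathbf{x}_i) = 0, \qquad j = 0, \dots, M-1, $$
and it is this system of $M$ equations that the closed-form solution below satisfies.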
In general, for a nonlinear model, we would use some gradient-based optimisation strategy for finding the optimal set of weights which minimises the error. Here, due to the linearity of the model, the solution can be calculated directly in closed form. We won't go into the detail here, but simply give the solution. Let us first define the matrix $\boldsymbol{\Phi}$, a rectangular $N \times M$ matrix often referred to as the design matrix:
$$ \boldsymbol{\Phi} = \begin{pmatrix} \phi_0(\mathbf{x}_1) & \phi_1(\mathbf{x}_1) & \cdots & \phi_{M-1}(\mathbf{x}_1) \\ \phi_0(\mathbf{x}_2) & \phi_1(\mathbf{x}_2) & \cdots & \phi_{M-1}(\mathbf{x}_2) \\ \vdots & \vdots & \ddots & \vdots \\ \phi_0(\mathbf{x}_N) & \phi_1(\mathbf{x}_N) & \cdots & \phi_{M-1}(\mathbf{x}_N) \end{pmatrix}. $$
Then the "optimal" weights for the model are given by
$$ \mathbf{w}^\star = \left( \boldsymbol{\Phi}^\top \boldsymbol{\Phi} \right)^{-1} \boldsymbol{\Phi}^\top \mathbf{t}, $$
where $\mathbf{t} = (t_1, \dots, t_N)^\top$ is the vector of training targets.
We see that the calculation involves taking the inverse of a dense matrix, i.e. $\boldsymbol{\Phi}^\top \boldsymbol{\Phi}$. This will be computationally expensive if the number of features $M$ and the number of data points $N$ are large: the time for the calculation grows quadratically with $M$ and linearly in $N$. For large numbers of features $M$ (and therefore weights), we instead seek to use iterative optimisation schemes like gradient descent.
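Putting this together, here is a minimal sketch of the closed-form fit (in Python with NumPy, using Gaussian basis functions with arbitrarily chosen centres and length scale, on made-up data; rather than forming the inverse explicitly, it uses `np.linalg.lstsq`, which solves the same least-squares problem more stably):

```python
import numpy as np

def design_matrix(x, mus, s):
    """N x M design matrix: a constant (bias) feature plus Gaussian basis functions."""
    gauss = np.exp(-(x[:, None] - mus[None, :])**2 / (2 * s**2))
    return np.hstack([np.ones((x.shape[0], 1)), gauss])

# Made-up training data (illustrative only)
rng = np.random.default_rng(1)
x_train = np.linspace(0.0, 1.0, 50)
t_train = np.sin(2 * np.pi * x_train) + 0.1 * rng.standard_normal(x_train.shape)

# Evaluate the basis functions at the training inputs
mus = np.linspace(0.0, 1.0, 8)    # arbitrary centres
s = 0.15                          # arbitrary length scale
Phi = design_matrix(x_train, mus, s)

# "Optimal" weights: solve the least-squares problem (equivalent to the normal equations)
w_star, *_ = np.linalg.lstsq(Phi, t_train, rcond=None)

# Predictions at new inputs reuse the same basis functions
x_test = np.linspace(0.0, 1.0, 200)
y_pred = design_matrix(x_test, mus, s) @ w_star
```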
Role of the bias parameter
We will see in a worked sample below how we can implement this and fit a linear model.
Before we do this, it is a good opportunity to look at the role of the bias parameter $w_0$, remembering that we conventionally choose $\phi_0(\mathbf{x}) = 1$. So let us look at what happens when we set the gradient of the error with respect to $w_0$ to zero. The gradient is
$$ \frac{\partial L}{\partial w_0} = -\frac{2}{N} \sum_{i=1}^{N} \left( t_i - w_0 - \sum_{j=1}^{M-1} w_j \phi_j(\mathbf{x}_i) \right). $$
So let us rewrite this a little, noting that $\bar{t} = \frac{1}{N} \sum_{i=1}^{N} t_i$ and $\bar{\phi}_j = \frac{1}{N} \sum_{i=1}^{N} \phi_j(\mathbf{x}_i)$; then
$$ \frac{\partial L}{\partial w_0} = -2 \left( \bar{t} - w_0 - \sum_{j=1}^{M-1} w_j \bar{\phi}_j \right). $$
OK, so all of this equals zero, which means
$$ w_0 = \bar{t} - \sum_{j=1}^{M-1} w_j \bar{\phi}_j. $$
What we notice is that the bias parameter $w_0$ provides a constant shift which corrects for the difference between the average of the target variables over the training set and the weighted sum of the averages of the basis functions evaluated at the input points.
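As a quick numerical sanity check of this relationship, here is a small sketch (a straight-line fit on made-up data, so $\phi_0(x) = 1$ and $\phi_1(x) = x$) confirming that the fitted bias equals the difference of the averages:

```python
import numpy as np

# Tiny illustrative fit: a constant (bias) feature plus one basis function, phi_1(x) = x
rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 20)
t = 1.5 + 3.0 * x + 0.05 * rng.standard_normal(x.shape)

Phi = np.column_stack([np.ones_like(x), x])     # N x 2 design matrix
w, *_ = np.linalg.lstsq(Phi, t, rcond=None)     # w[0] is the bias w_0

# The fitted bias should match: mean of the targets minus the weighted mean of phi_1
print(w[0], np.mean(t) - w[1] * np.mean(x))     # the two numbers agree (up to rounding)
```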